Confederate States of America

The Confederate States of America (CSA), commonly referred to as the Confederate States (C.S.), the Confederacy, or the South, was an unrecognized breakaway republic in the Southern United States that existed from February 8, 1861, to May 9, 1865. The Confederacy comprised eleven U.S. states that declared secession and warred against the United States during the American Civil War. The states were South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, Tennessee, and North Carolina.
The Confederacy was formed on February 8, 1861, by seven slave states: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas. All seven states were in the Deep South region of the United States, whose economy was heavily dependent upon agriculture, especially cotton, and a plantation system that relied upon enslaved Americans of African descent for labor. Convinced that white supremacy and slavery were threatened by the November 1860 election of Republican Abraham Lincoln to the U.S. presidency on a platform that opposed the expansion of slavery into the western territories, the seven slave states seceded from the United States, with the loyal states becoming known as the Union during the ensuing American Civil War. In the Cornerstone Speech, Confederate Vice President Alexander H. Stephens described its ideology as centrally based "upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition."
Before Lincoln took office on March 4, 1861, a provisional Confederate government was established on February 8, 1861. It was considered illegal by the United States government, and Northerners thought of the Confederates as traitors. After war began in April, four slave states of the Upper South—Virginia, Arkansas, Tennessee, and North Carolina—also joined the Confederacy. Four slave states, Delaware, Maryland, Kentucky, and Missouri, remained in the Union and became known as border states. The Confederacy nevertheless recognized two of them, Missouri and Kentucky, as members, accepting rump state assembly declarations of secession as authorization for full delegations of representatives and senators in the Confederate Congress. In the early part of the Civil War, the Confederacy controlled and governed more than half of Kentucky and the southern portion of Missouri, but these states were never substantially controlled by Confederate forces after 1862, despite the efforts of Confederate shadow governments, which were eventually defeated and expelled from both states. The Union rejected these rump declarations of secession as illegitimate, while the Confederacy treated both states as full members.
The Civil War began on April 12, 1861, when the Confederates attacked Fort Sumter, a Union fort in the harbor of Charleston, South Carolina. No foreign government ever recognized the Confederacy as an independent country, although Great Britain and France granted it belligerent status, which allowed Confederate agents to contract with private concerns for weapons and other supplies. By 1865, the Confederacy's civilian government had dissolved into chaos: the Confederate States Congress adjourned sine die on March 18, effectively ceasing to exist as a legislative body. After four years of heavy fighting, nearly all Confederate land and naval forces either surrendered or otherwise ceased hostilities by May 1865. The war lacked a clean end date, with Confederate forces surrendering or disbanding sporadically throughout most of 1865. The most significant capitulation was Confederate general Robert E. Lee's surrender to Ulysses S. Grant at Appomattox on April 9, after which any doubt about the war's outcome or the Confederacy's survival was extinguished, although another large army under Confederate general Joseph E. Johnston did not formally surrender to William T. Sherman until April 26. Contemporaneously, President Lincoln was shot by Confederate sympathizer John Wilkes Booth on April 14 and died the next day. Confederate President Jefferson Davis's administration declared the Confederacy dissolved on May 5, and Davis acknowledged in later writings that the Confederacy "disappeared" in 1865. On May 9, 1865, U.S. President Andrew Johnson officially called an end to the armed resistance in the South.
After the war, during the Reconstruction era, the Confederate states were readmitted to the Congress after each ratified the 13th Amendment to the U.S. Constitution outlawing slavery. Lost Cause mythology, an idealized view of the Confederacy valiantly fighting for a just cause, emerged in the decades after the war among former Confederate generals and politicians, and in organizations such as the United Daughters of the Confederacy and the Sons of Confederate Veterans. Intense periods of Lost Cause activity developed around the turn of the 20th century and during the civil rights movement of the 1950s and 1960s in reaction to growing support for racial equality. Advocates sought to ensure future generations of Southern whites would continue to support white supremacist policies such as the Jim Crow laws through activities such as building Confederate monuments and influencing the authors of textbooks to write on Lost Cause ideology. The modern display of Confederate flags primarily started during the 1948 presidential election, when the battle flag was used by the Dixiecrats. During the Civil Rights Movement, segregationists used it for demonstrations.
On February 22, 1862, the Confederate States Constitution of seven state signatories—Mississippi, South Carolina, Florida, Alabama, Georgia, Louisiana, and Texas—replaced the Provisional Constitution of February 8, 1861, with one stating in its preamble a desire for a "permanent federal government". Four additional slave-holding states—Virginia, Arkansas, Tennessee, and North Carolina—declared their secession and joined the Confederacy following a call by U.S. President Abraham Lincoln for troops from each state to recapture Sumter and other seized federal properties in the South.
Missouri and Kentucky were represented by partisan factions adopting the forms of state governments in the Confederate government of Missouri and Confederate government of Kentucky, and the Confederacy controlled more than half of Kentucky and the southern portion of Missouri early in the war. Neither state's Confederate government controlled any substantial territory or population after 1862, while the antebellum state governments in both maintained their representation in the Union. Also fighting for the Confederacy were two of the "Five Civilized Tribes"—the Choctaw and the Chickasaw—in Indian Territory, and a new, but uncontrolled, Confederate Territory of Arizona. Efforts by certain factions in Maryland to secede were halted by federal imposition of martial law; Delaware, though of divided loyalty, did not attempt it. A Unionist government was formed in opposition to the secessionist state government in Richmond and administered the western parts of Virginia that had been occupied by Federal troops. The Restored Government of Virginia later recognized the new state of West Virginia, which was admitted to the Union during the war on June 20, 1863, and relocated to Alexandria for the rest of the war.
Confederate control over its claimed territory and population in congressional districts steadily shrank from three-quarters to a third during the American Civil War due to the Union's successful overland campaigns, its control of inland waterways into the South, and its blockade of the southern coast. With the Emancipation Proclamation on January 1, 1863, the Union made abolition of slavery a war goal (in addition to reunion). As Union forces moved southward, large numbers of plantation slaves were freed. Many joined the Union lines, enrolling in service as soldiers, teamsters and laborers. The most notable advance was Sherman's "March to the Sea" in late 1864. Much of the Confederacy's infrastructure was destroyed, including telegraphs, railroads, and bridges. Plantations in the path of Sherman's forces were severely damaged. Internal movement within the Confederacy became increasingly difficult, weakening its economy and limiting army mobility.
These losses created an insurmountable disadvantage in men, materiel, and finance. Public support for Confederate President Jefferson Davis's administration eroded over time due to repeated military reverses, economic hardships, and allegations of autocratic government. After four years of campaigning, Richmond was captured by Union forces in April 1865. A few days later General Robert E. Lee surrendered to Union General Ulysses S. Grant, effectively signaling the collapse of the Confederacy. President Davis was captured on May 10, 1865, and jailed for treason, but no trial was ever held.
The Confederacy was established by the Montgomery Convention in February 1861 by seven states (South Carolina, Mississippi, Alabama, Florida, Georgia, Louisiana, adding Texas in March before Lincoln's inauguration), expanded in May–July 1861 (with Virginia, Arkansas, Tennessee, North Carolina), and disintegrated in April–May 1865. It was formed by delegations from seven slave states of the Lower South that had proclaimed their secession from the Union. After the fighting began in April, four additional slave states seceded and were admitted. Later, two slave states (Missouri and Kentucky) and two territories were given seats in the Confederate Congress.
Its establishment flowed from and deepened Southern nationalism, which prepared men to fight for "The Southern Cause". This "Cause" included support for states' rights, tariff policy, and internal improvements, but above all, cultural and financial dependence on the South's slavery-based economy. The convergence of race and slavery, politics, and economics raised almost all South-related policy questions to the status of moral questions over way of life, merging love of things Southern and hatred of things Northern. As the war approached, political parties split, and national churches and interstate families divided along sectional lines. According to historian John M. Coski:
The statesmen who led the secession movement were unashamed to explicitly cite the defense of slavery as their prime motive ... Acknowledging the centrality of slavery to the Confederacy is essential for understanding the Confederate ...
Southern Democrats had chosen John Breckinridge as their candidate during the U.S. presidential election of 1860, but in no Southern state (other than South Carolina, where the legislature chose the electors) was support for him unanimous, as all of the other states recorded at least some popular votes for one or more of the other three candidates (Abraham Lincoln, Stephen A. Douglas and John Bell). Support for these candidates, collectively, ranged from significant to an outright majority, with extremes running from 25% in Texas to 81% in Missouri. There were minority views everywhere, especially in the upland and plateau areas of the South, particularly concentrated in western Virginia and eastern Tennessee. The first six signatory states establishing the Confederacy accounted for about one-fourth of its population and voted 43% for pro-Union candidates. The four states which entered after the attack on Fort Sumter held almost half the population of the Confederacy and voted 53% for pro-Union candidates. Three high-turnout states stood at the extremes: Texas, with 5% of the population, voted only 20% for pro-Union candidates, while Kentucky and Missouri, with one-fourth of the Confederate population, voted a combined 68% for the pro-Union Lincoln, Douglas, and Bell.
Following South Carolina's unanimous 1860 secession vote, no other Southern states considered the question until 1861, and when they did none had a unanimous vote. All had residents who cast significant numbers of Unionist votes in either the legislature, conventions, popular referendums, or in all three. Voting to remain in the Union did not necessarily mean that individuals were sympathizers with the North. Once fighting began, many of those who had voted to remain in the Union, particularly in the Deep South, accepted the majority decision and supported the Confederacy.
Many writers have evaluated the Civil War as an American tragedy—a "Brothers' War", pitting "brother against brother, father against son, kin against kin of every degree".
According to historian Avery O. Craven in 1950, the Confederate States of America nation, as a state power, was created by secessionists in Southern slave states, who believed that the federal government was making them second-class citizens. They judged the agents of change to be abolitionists and anti-slavery elements in the Republican Party, whom they believed used repeated insult and injury to subject them to intolerable "humiliation and degradation". The "Black Republicans" (as the Southerners called them) and their allies soon dominated the U.S. House, Senate, and Presidency. On the U.S. Supreme Court, Chief Justice Roger B. Taney (a presumed supporter of slavery) was 83 years old and ailing.
During the campaign for president in 1860, some secessionists, including William L. Yancey, threatened disunion should Lincoln (who opposed the expansion of slavery into the territories) be elected. Yancey toured the North calling for secession, while Stephen A. Douglas toured the South calling for union if Lincoln were elected. To the secessionists the Republican intent was clear: to contain slavery within its present bounds and, eventually, to eliminate it entirely. A Lincoln victory presented them with a momentous choice (as they saw it), even before his inauguration—"the Union without slavery, or slavery without the Union".
The new [Confederate] Constitution has put at rest forever all the agitating questions relating to our peculiar institutions—African slavery as it exists among us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. Jefferson, in his forecast, had anticipated this, as the "rock upon which the old Union would split." He was right. What was conjecture with him, is now a realized fact. But whether he fully comprehended the great truth upon which that rock stood and stands, may be doubted.
The prevailing ideas entertained by him and most of the leading statesmen at the time of the formation of the old Constitution were, that the enslavement of the African was in violation of the laws of nature; that it was wrong in principle, socially, morally and politically. It was an evil they knew not well how to deal with; but the general opinion of the men of that day was, that, somehow or other, in the order of Providence, the institution would be evanescent and pass away... Those ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error. It was a sandy foundation, and the idea of a Government built upon it—when the "storm came and the wind blew, it fell."
Our new government is founded upon exactly the opposite ideas; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth.
Alexander H. Stephens, Cornerstone Speech, delivered at the Savannah Theatre, March 21, 1861
The immediate catalyst for secession was the victory of the Republican Party and the election of Abraham Lincoln as president in the 1860 elections. American Civil War historian James M. McPherson suggested that, for Southerners, the most ominous feature of the Republican victories in the congressional and presidential elections of 1860 was the magnitude of those victories: Republicans captured over 60 percent of the Northern vote and three-fourths of its Congressional delegations. The Southern press said that such Republicans represented the anti-slavery portion of the North, "a party founded on the single sentiment ... of hatred of African slavery", and now the controlling power in national affairs. The "Black Republican party" could overwhelm the status of white supremacy in the South. The New Orleans Delta said of the Republicans, "It is in fact, essentially, a revolutionary party" to overthrow slavery. By 1860, sectional disagreements between North and South concerned primarily the status of slavery in the United States. The specific question at issue was whether slavery would be permitted to expand into the western territories, leading to more slave states, or be prevented from doing so, which was widely believed would place slavery on a course of ultimate extinction. Historian Drew Gilpin Faust observed that "leaders of the secession movement across the South cited slavery as the most compelling reason for southern independence". Although most white Southerners did not own slaves, the majority supported the institution of slavery and benefited indirectly from the slave society. For struggling yeomen and subsistence farmers, the slave society provided a large class of people ranked lower in the social scale than themselves. Secondary differences related to issues of free speech, runaway slaves, expansion into Cuba, and states' rights.
Historian Emory Thomas assessed the Confederacy's self-image by studying correspondence sent by the Confederate government in 1861–62 to foreign governments. He found that Confederate diplomacy projected multiple contradictory self-images:
The Southern nation was by turns a guileless people attacked by a voracious neighbor, an 'established' nation in some temporary difficulty, a collection of bucolic aristocrats making a romantic stand against the banalities of industrial democracy, a cabal of commercial farmers seeking to make a pawn of King Cotton, an apotheosis of nineteenth-century nationalism and revolutionary liberalism, or the ultimate statement of social and economic reaction.
The Cornerstone Speech is frequently cited in analysis surrounding Confederate ideology. In it, Confederate Vice President Alexander H. Stephens declared that the "cornerstone" of the new government "rest[ed] upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth". Stephens' speech criticized "most" of the Founding Fathers for their views on slavery, accusing them of erroneously assuming that races are equal. He declared that disagreements over the enslavement of African Americans were the "immediate cause" of secession and that the Confederate constitution had resolved such issues. Stephens contended that advances and progress in the sciences proved that the Declaration of Independence's view that "all men are created equal" was erroneous, while stating that the Confederacy was the first country in the world founded on the principle of white supremacy and that chattel slavery coincided with the Bible's teachings. After the Confederacy's defeat at the hands of the U.S. in the Civil War and the abolition of slavery, he attempted to retroactively deny and retract the opinions he had stated in the speech. Denying his earlier statements that slavery was the Confederacy's cause for leaving the Union, he contended to the contrary that he thought that the war was rooted in constitutional differences; this explanation by Stephens is widely rejected by historians.
Four of the seceding states, the Deep South states of South Carolina, Mississippi, Georgia, and Texas, issued formal declarations of the causes of their decision; each identified the threat to slaveholders' rights as the cause of, or a major cause of, secession. Georgia also claimed a general Federal policy of favoring Northern over Southern economic interests. Texas mentioned slavery 21 times, but also listed the failure of the federal government to live up to its obligations, in the original annexation agreement, to protect settlers along the exposed western frontier. Texas resolutions further stated that governments of the states and the nation were established "exclusively by the white race, for themselves and their posterity". They also stated that although equal civil and political rights applied to all white men, they did not apply to those of the "African race", further opining that the end of racial enslavement would "bring inevitable calamities upon both [races] and desolation upon the fifteen slave-holding states".
Alabama did not provide a separate declaration of causes. Instead, the Alabama ordinance stated "the election of Abraham Lincoln ... by a sectional party, avowedly hostile to the domestic institutions and to the peace and security of the people of the State of Alabama, preceded by many and dangerous infractions of the Constitution of the United States by many of the States and people of the northern section, is a political wrong of so insulting and menacing a character as to justify the people of the State of Alabama in the adoption of prompt and decided measures for their future peace and security". The ordinance invited "the slaveholding States of the South, who may approve such purpose, in order to frame a provisional as well as a permanent Government upon the principles of the Constitution of the United States" to participate in a February 4, 1861 convention in Montgomery, Alabama.
The secession ordinances of the remaining two states, Florida and Louisiana, simply declared their severing ties with the federal Union, without stating any causes. Afterward, the Florida secession convention formed a committee to draft a declaration of causes, but the committee was discharged before completion of the task. Only an undated, untitled draft remains.
Four of the Upper South states (Virginia, Arkansas, Tennessee, and North Carolina) rejected secession until after the clash at Fort Sumter. Virginia's ordinance stated a kinship with the slave-holding states of the Lower South, but did not name the institution itself as a primary reason for its course.
Arkansas's secession ordinance encompassed a strong objection to the use of military force to preserve the Union as its motivating reason. Before the outbreak of war, the Arkansas Convention had on March 20 given as their first resolution: "The people of the Northern States have organized a political party, purely sectional in its character, the central and controlling idea of which is hostility to the institution of African slavery, as it exists in the Southern States; and that party has elected a President ... pledged to administer the Government upon principles inconsistent with the rights and subversive of the interests of the Southern States."
North Carolina and Tennessee limited their ordinances to simply withdrawing, although Tennessee went so far as to make clear it wished to make no comment at all on the "abstract doctrine of secession".
In a message to the Confederate Congress on April 29, 1861, Jefferson Davis cited both the tariff and slavery for the South's secession.
The pro-slavery "Fire-Eaters" group of Southern Democrats, calling for immediate secession, were opposed by two factions. "Cooperationists" in the Deep South would delay secession until several states left the union, perhaps in a Southern Convention. Under the influence of men such as Texas Governor Sam Houston, delay would have the effect of sustaining the Union. "Unionists", especially in the Border South, often former Whigs, appealed to sentimental attachment to the United States. Southern Unionists' favorite presidential candidate was John Bell of Tennessee, sometimes running under an "Opposition Party" banner.
Many secessionists were active politically. Governor William Henry Gist of South Carolina corresponded secretly with other Deep South governors, and most southern governors exchanged clandestine commissioners. Charleston's secessionist "1860 Association" published over 200,000 pamphlets to persuade the youth of the South. The most influential were: "The Doom of Slavery" and "The South Alone Should Govern the South", both by John Townsend of South Carolina; and James D. B. De Bow's "The Interest of Slavery of the Southern Non-slaveholder".
Developments in South Carolina started a chain of events. The foreman of a jury refused the legitimacy of federal courts, so Federal Judge Andrew Magrath ruled that U.S. judicial authority in South Carolina was vacated. A mass meeting in Charleston celebrating the Charleston and Savannah railroad and state cooperation led the South Carolina legislature to call for a Secession Convention. U.S. Senator James Chesnut, Jr. resigned, as did Senator James Henry Hammond.
Elections for Secessionist conventions were heated to "an almost raving pitch, no one dared dissent", according to historian William W. Freehling. Even once-respected voices, including the Chief Justice of South Carolina, John Belton O'Neall, lost election to the Secession Convention on a Cooperationist ticket. Across the South, mobs expelled Yankees and (in Texas) executed German-Americans suspected of loyalty to the United States. Generally, seceding conventions which followed did not call for a referendum to ratify, although Texas, Arkansas, Tennessee, and Virginia's second convention did. Kentucky declared neutrality, while Missouri had its own civil war until the Unionists took power and drove the Confederate legislators out of the state.
In February 1861, leading politicians from Northern states and border states that had yet to secede met in Washington, DC, for the Peace Conference of 1861. Attendees rejected the Crittenden Compromise and other proposals. Congress eventually took up the Corwin Amendment in an effort to bring the seceding states back to the Union and to convince the border slave states to remain. Proposed by Ohio Congressman Thomas Corwin, the amendment to the United States Constitution would shield the "domestic institutions" of the states (which in 1861 included slavery) from the constitutional amendment process and from abolition or interference by Congress.
It was passed by the 36th Congress on March 2, 1861. The House approved it by a vote of 133 to 65 and the United States Senate adopted it, with no changes, on a vote of 24 to 12. It was then submitted to the state legislatures for ratification. In his inaugural address Lincoln endorsed the proposed amendment.
The text was as follows:
No amendment shall be made to the Constitution which will authorize or give to Congress the power to abolish or interfere, within any State, with the domestic institutions thereof, including that of persons held to labor or service by the laws of said State.
Had it been ratified by the required number of states prior to 1865, on its face it would have made institutionalized slavery immune to the constitutional amendment procedures and to interference by Congress.
The first secession state conventions from the Deep South sent representatives to meet at the Montgomery Convention in Montgomery, Alabama, on February 4, 1861. There the fundamental documents of government were promulgated, a provisional government was established, and a representative Congress met for the Confederate States of America.
The new provisional Confederate President Jefferson Davis issued a call for 100,000 men from the various states' militias to defend the newly formed Confederacy. All Federal property was seized, along with gold bullion and coining dies at the U.S. mints in Charlotte, North Carolina; Dahlonega, Georgia; and New Orleans. The Confederate capital was moved from Montgomery to Richmond, Virginia, in May 1861. On February 22, 1862, Davis was inaugurated as president with a term of six years.
The newly inaugurated Confederate administration pursued a policy of national territorial integrity, continuing earlier state efforts in 1860 and early 1861 to remove U.S. government presence from within their boundaries. These efforts included taking possession of U.S. courts, custom houses, post offices, and most notably, arsenals and forts. But after the Confederate attack and capture of Fort Sumter in April 1861, Lincoln called up 75,000 of the states' militia to muster under his command. The stated purpose was to re-occupy U.S. properties throughout the South, as the U.S. Congress had not authorized their abandonment. The resistance at Fort Sumter signaled his change of policy from that of the Buchanan Administration. Lincoln's response ignited a firestorm of emotion. The people of both North and South demanded war, with soldiers rushing to their colors in the hundreds of thousands. Four more states (Virginia, North Carolina, Tennessee, and Arkansas) refused Lincoln's call for troops and declared secession, while Kentucky maintained an uneasy "neutrality".
Secessionists argued that the United States Constitution was a contract among sovereign states that could be abandoned at any time without consultation and that each state had a right to secede. After intense debates and statewide votes, seven Deep South cotton states passed secession ordinances by February 1861 (before Abraham Lincoln took office as president), while secession efforts failed in the other eight slave states. Delegates from those seven formed the CSA in February 1861, selecting Jefferson Davis as the provisional president. Unionist talk of reunion failed and Davis began raising a 100,000-man army.
Initially, some secessionists may have hoped for a peaceful departure. Moderates in the Confederate Constitutional Convention included a provision against importation of slaves from Africa to appeal to the Upper South. Non-slave states might join, but the radicals secured a two-thirds requirement in both houses of Congress to accept them.
Seven states declared their secession from the United States before Lincoln took office on March 4, 1861. After the Confederate attack on Fort Sumter on April 12, 1861, and Lincoln's subsequent call for troops on April 15, four more states declared their secession: Virginia, Arkansas, Tennessee, and North Carolina.
Kentucky declared neutrality, but after Confederate troops moved in, the state legislature asked for Union troops to drive them out. Delegates from 68 Kentucky counties were sent to the Russellville Convention that signed an Ordinance of Secession. Kentucky was formally admitted into the Confederacy on December 10, 1861, with Bowling Green as its first capital. Early in the war, the Confederacy controlled more than half of Kentucky but largely lost control of the state in 1862. The splinter Confederate government of Kentucky relocated to accompany western Confederate armies and never controlled the state population after 1862. By the end of the war, 90,000 Kentuckians had fought on the side of the Union, compared to 35,000 for the Confederacy.
In Missouri, a constitutional convention was approved and delegates elected by voters. The convention rejected secession 89–1 on March 19, 1861. The governor maneuvered to take control of the St. Louis Arsenal and restrict Federal movements. This led to a confrontation, and in June federal forces drove him and the General Assembly from Jefferson City. The executive committee of the constitutional convention called the members together in July. The convention declared the state offices vacant and appointed a Unionist interim state government. The exiled governor called a rump session of the former General Assembly together in Neosho and, on October 31, 1861, it passed an ordinance of secession. It is still a matter of debate as to whether a quorum existed for this vote. The Confederate state government was unable to control substantial parts of Missouri territory, effectively only controlling southern Missouri early in the war. It had its capital first at Neosho, then at Cassville, before being driven out of the state. For the remainder of the war, it operated as a government in exile at Marshall, Texas.
Not having seceded, neither Kentucky nor Missouri was declared in rebellion in Lincoln's Emancipation Proclamation. The Confederacy recognized the pro-Confederate claimants in both Kentucky (December 10, 1861) and Missouri (November 28, 1861) and laid claim to those states, granting them Congressional representation and adding two stars to the Confederate flag. Voting for the representatives was mostly done by Confederate soldiers from Kentucky and Missouri.
In order of secession, the first seven states were South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas; Virginia, Arkansas, Tennessee, and North Carolina followed after the attack on Fort Sumter.
In Virginia, the populous counties along the Ohio and Pennsylvania borders rejected the Confederacy. Unionists held a Convention in Wheeling in June 1861, establishing a "restored government" with a rump legislature, but sentiment in the region remained deeply divided. In the 50 counties that would make up the state of West Virginia, voters from 24 counties had voted for disunion in Virginia's May 23 referendum on the ordinance of secession. In the 1860 presidential election "Constitutional Democrat" Breckinridge had outpolled "Constitutional Unionist" Bell in the 50 counties by 1,900 votes, 44% to 42%. Regardless of scholarly disputes over election procedures and results county by county, altogether they simultaneously supplied over 20,000 soldiers to each side of the conflict. Representatives for most of the counties were seated in both state legislatures at Wheeling and at Richmond for the duration of the war.
Attempts to secede from the Confederacy by some counties in East Tennessee were checked by martial law. Although slaveholding Delaware and Maryland did not secede, citizens from those states exhibited divided loyalties. Regiments of Marylanders fought in Lee's Army of Northern Virginia. Overall, 24,000 men from Maryland joined the Confederate armed forces, compared to 63,000 who joined Union forces.
Delaware never produced a full regiment for the Confederacy, but neither did it emancipate slaves as did Missouri and West Virginia. District of Columbia citizens made no attempts to secede and through the war years, referendums sponsored by President Lincoln approved systems of compensated emancipation and slave confiscation from "disloyal citizens".
Citizens at Mesilla and Tucson in the southern part of New Mexico Territory formed a secession convention, which voted to join the Confederacy on March 16, 1861, and appointed Dr. Lewis S. Owings as the new territorial governor. They won the Battle of Mesilla and established a territorial government with Mesilla serving as its capital. The Confederacy proclaimed the Confederate Arizona Territory on February 14, 1862, north to the 34th parallel. Marcus H. MacWillie served in both Confederate Congresses as Arizona's delegate. In 1862, the Confederate New Mexico Campaign to take the northern half of the U.S. territory failed and the Confederate territorial government in exile relocated to San Antonio, Texas.
Confederate supporters in the trans-Mississippi west also claimed portions of the Indian Territory after the United States evacuated the federal forts and installations. Over half of the American Indian troops participating in the Civil War from the Indian Territory supported the Confederacy; troops and one general were enlisted from each tribe. On July 12, 1861, the Confederate government signed a treaty with both the Choctaw and Chickasaw Indian nations. After several battles, Union armies took control of the territory.
The Indian Territory never formally joined the Confederacy, but it did receive representation in the Confederate Congress. Many Indians from the Territory were integrated into regular Confederate Army units. After 1863, the tribal governments sent representatives to the Confederate Congress: Elias Cornelius Boudinot representing the Cherokee and Samuel Benton Callahan representing the Seminole and Creek. The Cherokee Nation aligned with the Confederacy. They practiced and supported slavery, opposed abolition, and feared their lands would be seized by the Union. After the war, the Indian territory was disestablished, their black slaves were freed, and the tribes lost some of their lands.
Montgomery, Alabama, served as the capital of the Confederate States of America from February 4 until May 29, 1861, in the Alabama State Capitol. Six states created the Confederate States of America there on February 8, 1861. The Texas delegation was seated at the time, so it is counted in the "original seven" states of the Confederacy; it had no roll call vote until after its referendum made secession "operative". Two sessions of the Provisional Congress were held in Montgomery, adjourning May 21. The Permanent Constitution was adopted there on March 12, 1861.
The Confederate Constitution provided for a permanent capital in a 100-square-mile district to be ceded by a state to the central government. Atlanta, which had not yet supplanted Milledgeville, Georgia, as its state capital, put in a bid noting its central location and rail connections, as did Opelika, Alabama, noting its strategically interior situation, rail connections, and nearby deposits of coal and iron.
Richmond, Virginia, was chosen for the interim capital at the Virginia State Capitol. The move was used by Vice President Stephens and others to encourage other border states to follow Virginia into the Confederacy. In the political moment it was a show of "defiance and strength". The war for Southern independence was sure to be fought in Virginia, which also had the largest military-aged white population in the South, along with the infrastructure, resources, and supplies required to sustain a war. The Davis administration's policy was that "It must be held at all hazards."
The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held in the new capital. The Permanent Confederate Congress and President were elected in the states and army camps on November 6, 1861. The First Congress met in four sessions in Richmond from February 18, 1862, to February 17, 1864. The Second Congress met there in two sessions, from May 2, 1864, to March 18, 1865.
As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress led by Henry S. Foote of Tennessee argued for moving the capital from Richmond. At the approach of Federal armies in mid-1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress to session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate farther south. Little came of these plans before Lee's surrender at Appomattox Court House, Virginia on April 9, 1865. Davis and most of his cabinet fled to Danville, Virginia, which served as their headquarters for eight days.
During the four years of its existence, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. None were ever officially recognized by a foreign government. The United States government regarded the Southern states as being in rebellion or insurrection and so refused any formal recognition of their status.
Even before Fort Sumter, U.S. Secretary of State William H. Seward issued formal instructions to the American minister to Britain, Charles Francis Adams:
[Make] no expressions of harshness or disrespect, or even impatience concerning the seceding States, their agents, or their people, [those States] must always continue to be, equal and honored members of this Federal Union, [their citizens] still are and always must be our kindred and countrymen.
Seward instructed Adams that if the British government seemed inclined to recognize the Confederacy, or even waver in that regard, it was to receive a sharp warning, with a strong hint of war:
[if Britain is] tolerating the application of the so-called seceding States, or wavering about it, [they cannot] remain friends with the United States ... if they determine to recognize [the Confederacy], [Britain] may at the same time prepare to enter into alliance with the enemies of this republic.
The United States government never declared war on those "kindred and countrymen" in the Confederacy but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861. It called for troops to recapture forts and suppress what Lincoln later called an "insurrection and rebellion".
Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war largely governed military relations between the two uniformed belligerents.
On the part of the Confederacy, immediately following Fort Sumter the Confederate Congress proclaimed that "war exists between the Confederate States and the Government of the United States, and the States and Territories thereof". A state of war was not to formally exist between the Confederacy and those states and territories in the United States allowing slavery, although Confederate Rangers were compensated for destruction they could effect there throughout the war.
Concerning the international status and nationhood of the Confederate States of America, in 1869 the United States Supreme Court in Texas v. White, 74 U.S. (7 Wall.) 700 (1869) ruled Texas' declaration of secession was legally null and void. Jefferson Davis, former President of the Confederacy, and Alexander H. Stephens, its former vice-president, both wrote postwar arguments in favor of secession's legality and the international legitimacy of the Government of the Confederate States of America, most notably Davis' The Rise and Fall of the Confederate Government.
Once war with the United States began, the Confederacy pinned its hopes for survival on military intervention by Great Britain or France. The Confederate government sent James M. Mason to London and John Slidell to Paris. On their way to Europe in 1861, the U.S. Navy intercepted their ship, the Trent, and forcibly took them to Boston, an international episode known as the Trent Affair. The diplomats were eventually released and continued their voyage to Europe. However, their mission was unsuccessful; historians give them low marks for their poor diplomacy. Neither secured diplomatic recognition for the Confederacy, much less military assistance.
The Confederates who had believed that "cotton is king", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources of cotton, most notably India and Egypt. Britain had so much cotton that it was exporting some to France. England was not about to go to war with the U.S. to acquire more cotton at the risk of losing the large quantities of food imported from the North.
Aside from the purely economic questions, there was also the clamorous ethical debate. Great Britain took pride in being a leader in ending the transatlantic enslavement of Africans, phasing the practice out within its empire starting in 1833 and deploying the Royal Navy to patrol the waters of the middle passage to prevent additional slave ships from reaching the Western Hemisphere. Confederate diplomats found little support for American slavery, cotton trade or not. A series of slave narratives about American slavery was being published in London. It was in London that the first World Anti-Slavery Convention had been held in 1840; it was followed by regular smaller conferences. A string of eloquent and sometimes well-educated black abolitionist speakers crisscrossed England, Scotland, and Ireland. In addition to exposing the reality of America's chattel slavery—some were fugitive slaves—they rebutted the Confederate position that blacks were "unintellectual, timid, and dependent", and "not equal to the white man...the superior race," as it was put by Confederate Vice-president Alexander H. Stephens in his famous Cornerstone Speech. Frederick Douglass, Henry Highland Garnet, Sarah Parker Remond, her brother Charles Lenox Remond, James W. C. Pennington, Martin Delany, Samuel Ringgold Ward, and William G. Allen all spent years in Britain, where fugitive slaves were safe and, as Allen said, there was an "absence of prejudice against color. Here the colored man feels himself among friends, and not among enemies". One speaker alone, William Wells Brown, gave more than 1,000 lectures on the shame of American chattel slavery.
Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston showed interest in recognition of the Confederacy or at least mediation of the war. British Chancellor of the Exchequer William Gladstone, convinced of the necessity of intervention on the Confederate side based on the successful diplomatic intervention in the Second Italian War of Independence against Austria, attempted unsuccessfully to convince Lord Palmerston to intervene. By September 1862, the Union victory at the Battle of Antietam, Lincoln's preliminary Emancipation Proclamation, and abolitionist opposition in Britain put an end to these possibilities. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain shipments, the end of British exports to the U.S., and the seizure of billions of pounds invested in American securities. War would have meant higher taxes in Britain, another invasion of Canada, and full-scale worldwide attacks on the British merchant fleet. Outright recognition would have meant certain war with the United States. In mid-1862, fears of a race war (as had transpired in the Haitian Revolution of 1791–1804) led to the British considering intervention for humanitarian reasons. Lincoln's Emancipation Proclamation did not lead to interracial violence, let alone a bloodbath, but it did give the friends of the Union strong talking points in the arguments that raged across Britain.
John Slidell, the Confederate States emissary to France, succeeded in negotiating a loan of $15,000,000 from Erlanger and other French capitalists. The money went to buy ironclad warships, and military supplies that came in with blockade runners. The British government did allow the construction of blockade runners in Britain; they were owned and operated by British financiers and ship owners; a few were owned and operated by the Confederacy. The British investors' goal was to get highly profitable cotton.
Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. Those nations recognized the Union and Confederate sides as belligerents. In 1863 the Confederacy expelled European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. Some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border. The Confederacy appointed Ambrose Dudley Mann as special agent to the Holy See on September 24, 1863. But the Holy See never released a formal statement supporting or recognizing the Confederacy. In November 1863, Mann met Pope Pius IX in person and received a letter supposedly addressed "to the Illustrious and Honorable Jefferson Davis, President of the Confederate States of America"; Mann had mistranslated the address. In his report to Richmond, Mann claimed a great diplomatic achievement for himself, asserting the letter was "a positive recognition of our Government". The letter was indeed used in propaganda, but Confederate Secretary of State Judah P. Benjamin told Mann it was "a mere inferential recognition, unconnected with political action or the regular establishment of diplomatic relations" and thus did not assign it the weight of formal recognition.
Nevertheless, the Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers, both official and unofficial, to assess whether there had been a de facto establishment of independence. These observers included Arthur Lyon Fremantle of the British Coldstream Guards, who entered the Confederacy via Mexico, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian Army. European travelers visited and wrote accounts for publication. Importantly in 1862, the Frenchman Charles Girard's Seven months in the rebel states during the North American War testified "this government ... is no longer a trial government ... but really a normal government, the expression of popular will". Fremantle went on to write in his book Three Months in the Southern States that he had:
...not attempted to conceal any of the peculiarities or defects of the Southern people. Many persons will doubtless highly disapprove of some of their customs and habits in the wilder portion of the country; but I think no generous man, whatever may be his political opinions, can do otherwise than admire the courage, energy, and patriotism of the whole population, and the skill of its leaders, in this struggle against great odds. And I am also of opinion that many will agree with me in thinking that a people in which all ranks and both sexes display a unanimity and a heroism which can never have been surpassed in the history of the world, is destined, sooner or later, to become a great and independent nation.
French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make "direct proposition" to Britain for joint recognition. The Emperor made the same assurance to British Members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament June 30 supporting joint Anglo-French recognition of the Confederacy. "Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure." Following the double disasters at Vicksburg and Gettysburg in July 1863, the Confederates "suffered a severe loss of confidence in themselves" and withdrew into an interior defensive position. There would be no help from the Europeans.
By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F. Kenner to Europe with a message that the war was fought solely for "the vindication of our rights to self-government and independence" and that "no sacrifice is too great, save that of honor". The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. Davis's message could not explicitly acknowledge that slavery was on the bargaining table due to still-strong domestic support for slavery among the wealthy and politically influential. European leaders all saw that the Confederacy was on the verge of total defeat.
The Confederacy's biggest foreign policy successes were with Cuba and Brazil. Militarily this meant little during the war. Brazil represented the "peoples most identical to us in Institutions", in which slavery remained legal until the 1880s. Cuba was a Spanish colony and the Captain–General of Cuba declared in writing that Confederate ships were welcome, and would be protected in Cuban ports. They were also welcome in Brazilian ports; slavery was legal throughout Brazil, and the abolitionist movement was small. After the end of the war, Brazil was the primary destination of those Southerners who wanted to continue living in a slave society, where, as one immigrant remarked, Confederado slaves were cheap. Historians speculate that if the Confederacy had achieved independence, it probably would have tried to acquire Cuba as a base of expansion.
Most soldiers who joined Confederate national or state military units joined voluntarily. Perman (2010) says historians are of two minds on why millions of soldiers seemed so eager to fight, suffer and die over four years:
Some historians emphasize that Civil War soldiers were driven by political ideology, holding firm beliefs about the importance of liberty, Union, or state rights, or about the need to protect or to destroy slavery. Others point to less overtly political reasons to fight, such as the defense of one's home and family, or the honor and brotherhood to be preserved when fighting alongside other men. Most historians agree that, no matter what he thought about when he went into the war, the experience of combat affected him profoundly and sometimes affected his reasons for continuing to fight.
Civil War historian E. Merton Coulter wrote that for those who would secure its independence, "The Confederacy was unfortunate in its failure to work out a general strategy for the whole war". Aggressive strategy called for offensive force concentration. Defensive strategy sought dispersal to meet demands of locally minded governors. The controlling philosophy evolved into a combination "dispersal with a defensive concentration around Richmond". The Davis administration considered the war purely defensive, a "simple demand that the people of the United States would cease to war upon us". Historian James M. McPherson is a critic of Lee's offensive strategy: "Lee pursued a faulty military strategy that ensured Confederate defeat".
As the Confederate government lost control of territory in campaign after campaign, it was said that "the vast size of the Confederacy would make its conquest impossible". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South. Heat exhaustion, sunstroke, endemic diseases such as malaria and typhoid would match the destructive effectiveness of the Moscow winter on the invading armies of Napoleon.
Early in the war both sides believed that one great battle would decide the conflict; the Confederates won a surprise victory at the First Battle of Bull Run, also known as First Manassas (the name used by Confederate forces). It drove the Confederate people "insane with joy"; the public demanded a forward movement to capture Washington, relocate the Confederate capital there, and admit Maryland to the Confederacy. A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion into Maryland halted at the Battle of Antietam in September 1862, generals proposed concentrating forces from state commands to re-invade the North. Nothing came of it. Again in mid-1863, at his incursion into Pennsylvania, Lee requested of Davis that Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign.
The eleven states of the Confederacy were outnumbered by the North about four-to-one in military manpower. It was overmatched far more in military equipment, industrial facilities, railroads for transport, and wagons supplying the front.
Confederates slowed the Yankee invaders, at heavy cost to the Southern infrastructure. The Confederates burned bridges, laid land mines in the roads, and made harbors, inlets, and inland waterways unusable with sunken mines (called "torpedoes" at the time). Coulter reports:
Rangers in twenty- to fifty-man units were awarded 50% of the value of property destroyed behind Union lines, regardless of location or loyalty. As Federals occupied the South, objections from loyal Confederates concerning Ranger horse-stealing and indiscriminate scorched-earth tactics behind Union lines led Congress to abolish the Ranger service two years later.
The Confederacy relied on external sources for war materials. The first came from trade with the enemy. "Vast amounts of war supplies" came through Kentucky, and thereafter, western armies were "to a very considerable extent" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of "outstanding importance". On April 17, President Davis called on privateer raiders, the "militia of the sea", to wage war on U.S. seaborne commerce. Despite noteworthy effort, over the course of the war the Confederacy was found unable to match the Union in ships and seamanship, materials and marine construction.
An inescapable obstacle to success in the warfare of mass armies was the Confederacy's lack of manpower, and sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–63, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, "More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy."
The armed forces of the Confederacy comprised three branches: the Army, the Navy, and the Marine Corps.
The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and were appointed to senior positions. Many had served in the Mexican–American War (including Robert E. Lee and Jefferson Davis), but some such as Leonidas Polk (who graduated from West Point but did not serve in the Army) had little or no experience.
The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks. Although no Army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia in 1863, but no midshipmen graduated before the Confederacy's end.
Most soldiers were white males aged between 16 and 28. The median year of birth was 1838, so half the soldiers were 23 or older by 1861. In early 1862, the Confederate Army was allowed to disintegrate for two months following expiration of short-term enlistments. Most of those in uniform would not re-enlist following their one-year commitment, so on April 16, 1862, the Confederate Congress enacted the first mass conscription on the North American continent. (The U.S. Congress followed a year later on March 3, 1863, with the Enrollment Act.) Rather than a universal draft, the initial program was a selective service with physical, religious, professional and industrial exemptions. These were narrowed as the war progressed. Initially substitutes were permitted, but by December 1863 these were disallowed. In September 1862 the age limit was increased from 35 to 45, and by February 1864 all men under 18 and over 45 were conscripted to form a reserve for state defense inside state borders. By March 1864, the Superintendent of Conscription reported that all across the Confederacy, every officer in constituted authority, man and woman, "engaged in opposing the enrolling officer in the execution of his duties". Although conscription was challenged in the state courts, the supreme courts of the Confederate states routinely rejected those challenges.
Many thousands of slaves served as personal servants to their owners, or were hired out as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for "local defense, not combat". Depleted by casualties and desertions, the military suffered chronic manpower shortages. In early 1865, the Confederate Congress, influenced by General Lee's public support, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused "to guarantee the freedom of black volunteers". No more than two hundred black combat troops were ever raised.
The immediate onset of war meant that it was fought by the "Provisional" or "Volunteer Army". State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large "Provisional" armies answering only to Davis. In filling the Confederate government's call for 100,000 men, another 200,000 were turned away because only those enlisting "for the duration" or twelve-month volunteers who brought their own arms or horses were accepted.
It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers. Efficiency in the lower officers was "greater than could have been reasonably expected". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was appointed by governors or elected by the enlisted men of their units. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available.
Anticipating the need for more "duration" men, in January 1862 Congress provided for company-level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress then authorized Davis to require a quota of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws.
The veteran Confederate army of early 1862 was mostly twelve-month volunteers with terms about to expire. Enlisted reorganization elections disintegrated the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a "rapid and widespread" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants.
In early 1862, the popular press suggested the Confederacy required a million men under arms. But veteran soldiers were not re-enlisting, and earlier secessionist volunteers did not return to serve. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards.
The Confederacy passed the first American law of national conscription on April 16, 1862. The white males of the Confederate States from 18 to 35 were declared members of the Confederate army for three years, and all men then enlisted were extended to a three-year term. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts; in September, those from 35 to 45 became conscripts. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals who had benefited earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, with those under eighteen and over forty-five limited to in-state duty.
Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, and industry, as well as ministers, teachers, and the physically unfit. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers.
Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law, which specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedaling six months later, Congress provided that overseers under 45 could be exempted only if they had held the occupation before the first Conscription Act. The number of officials under state exemptions, appointed through governors' patronage, expanded significantly. By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, unit officers in the field reported that substitutes over 50 and under 17 years old accounted for up to 90% of desertions.
The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, vesting authority over details in President Davis. Because the shame of being conscripted was greater than that of a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another, putting nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as far as state governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50-year-old conscripts and deserters; nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops: "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. Men who had lost only one arm or a leg were retained in the home guards. Ultimately, conscription was a failure, and its main value was in goading men to volunteer.
The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment."
The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston.
In January, President James Buchanan had attempted to resupply the garrison with the steamship Star of the West, but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that, so long as the Confederates did not resist the resupply, there would be no military reinforcement without further notice; but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender.
Following Sumter, Lincoln directed states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories at Big Bethel (Bethel Church, Virginia) in June, First Bull Run (First Manassas) in Virginia in July, and Wilson's Creek (Oak Hills) in Missouri in August. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri. Both North and South began training up armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September and no serious Confederate advance in western Virginia occurred until the next year.
Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the abandoned slaves. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862.
The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections.
Much of northwestern Virginia was under Federal control. In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of a Confederate counterattack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where at the naval Battle of Memphis, its River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base.
Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments on every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". British firms such as John Fraser and Company and S. Isaac, Campbell & Company developed small fleets of blockade-running ships, while the Ordnance Department secured its own blockade runners for dedicated munitions cargoes.
During the Civil War fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade in March, the ironclad CSS Virginia was forced into port and burned by the Confederates when they retreated in May. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade. Attempts were made by Commodore Josiah Tattnall III's ironclads from Savannah in 1862 with the CSS Atlanta. Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports. They were converted into commerce-raiding cruisers and manned by their British crews.
In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring.
In an attempt to seize the initiative, re-provision, protect farms in mid-growing season, and influence U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of only 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September, Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements Loring abandoned his position and by November the region was back in Federal control.
The failed Middle Tennessee campaign ended on January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), in which both sides suffered the largest percentage of casualties of any battle in the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory in April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay.
Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters, Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg, Pennsylvania despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy."
The fighting of September and November left the Confederates yielding Chattanooga, Tennessee, the gateway to the lower South. For the remainder of the war, fighting was confined to the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg.
In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whose sailors were British. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864, the ironclad CSS Albemarle engaged Union gunboats for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater.
Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook.
The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. Union forces captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack.
The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia–North Carolina, central Alabama–Florida, and Texas, the latter two areas surviving less from any notion of resistance than from the lack of Federal interest in occupying them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis's capital.
The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered a remnant of 50,000 from the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy. The CSS Stonewall sailed from Europe to break the Union blockade in March; upon reaching Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured on May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last Confederate military unit, the commerce raider CSS Shenandoah, surrendered on November 6, 1865, in Liverpool.
Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed "organized southern military resistance". The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; "the end had come". Jefferson Davis' assessment in 1890 determined, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States."
When the war ended, over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in giving them out. He issued a general amnesty to all Confederate participants in the "late Civil War" in 1868. Congress passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act of May 1872 lifted those restrictions. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet, and no one was charged with treason. An acquittal of Davis would have been humiliating for the government.
Davis was indicted for treason but never tried; he was released from prison on bail in May 1867. The amnesty of December 25, 1868, by President Johnson eliminated any possibility of Jefferson Davis (or anyone else associated with the Confederacy) standing trial for treason.
Henry Wirz, the commandant of a notorious prisoner-of-war camp near Andersonville, Georgia, was tried and convicted by a military court, and executed on November 10, 1865. The charges against him involved conspiracy and cruelty, not treason.
The U.S. government began a decade-long process known as Reconstruction which attempted to resolve the political and constitutional issues of the Civil War. The priorities were to guarantee that Confederate nationalism and slavery were ended, and to ratify and enforce the Thirteenth Amendment, which outlawed slavery; the Fourteenth, which guaranteed dual U.S. and state citizenship to all native-born residents, regardless of race; and the Fifteenth, which made it illegal to deny the right to vote because of race.
The Compromise of 1877 ended Reconstruction in the former Confederate states. Federal troops were withdrawn from the South, where conservative white Democrats had already regained political control of state governments, often through extreme violence and fraud to suppress black voting. The prewar South had many rich areas; the war left the entire region economically devastated by military action, ruined infrastructure, and exhausted resources. Still dependent on an agricultural economy and resisting investment in infrastructure, it remained dominated by the planter elite into the next century. Confederate veterans had been temporarily disenfranchised by Reconstruction policy, and Democrat-dominated legislatures passed new constitutions and amendments that excluded most blacks and many poor whites. This exclusion and a weakened Republican Party remained the norm until the Voting Rights Act of 1965. The Solid South of the early 20th century did not achieve national levels of prosperity until long after World War II.
In Texas v. White, 74 U.S. 700 (1869), the United States Supreme Court ruled, by a 5–3 majority, that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States of America. In this case, the court held that the Constitution did not permit a state to unilaterally secede from the United States. It further held that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were "absolutely null" under the Constitution. This case settled the law that applied to all questions regarding state legislation during the war. Furthermore, it decided one of the "central constitutional questions" of the Civil War: the Union is perpetual and indestructible, as a matter of constitutional law. In declaring that no state could leave the Union "except through revolution or through consent of the States", it was "explicitly repudiating the position of the Confederate states that the United States was a voluntary compact between sovereign states".
Historian Frank Lawrence Owsley argued that the Confederacy "died of states' rights". The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty. The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the "essence of military despotism".
Vice President Alexander H. Stephens feared losing the very form of republican government. Allowing President Davis to threaten "arbitrary arrests" to draft hundreds of governor-appointed "bomb-proof" bureaucrats conferred "more power than the English Parliament had ever bestowed on the king. History proved the dangers of such unchecked authority." The abolishment of draft exemptions for newspaper editors was interpreted as an attempt by the Confederate government to muzzle presses, such as the Raleigh NC Standard, to control elections and to suppress the peace meetings there. As Rable concludes, "For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights" without considerations of military necessity, pragmatism or compromise.
In 1863, Governor Pendleton Murrah of Texas determined that state troops were required for defense against Plains Indians and Union forces that might attack from Kansas. He refused to send his soldiers to the East. Governor Zebulon Vance of North Carolina showed intense opposition to conscription, limiting recruitment success. Vance's faith in states' rights drove him into repeated, stubborn opposition to the Davis administration.
Despite political differences within the Confederacy, no national political parties were formed because they were seen as illegitimate. "Anti-partyism became an article of political faith." Without a system of political parties building alternate sets of national leaders, electoral protests tended to be narrowly state-based, "negative, carping and petty". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, the lack of a functioning two-party system caused "real and direct damage" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration.
The enemies of President Davis proposed that the Confederacy "died of Davis". He was unfavorably compared to George Washington by critics such as Edward Alfred Pollard, editor of the most influential newspaper in the Confederacy, the Richmond (Virginia) Examiner. E. Merton Coulter summarizes, "The American Revolution had its Washington; the Southern Revolution had its Davis ... one succeeded and the other failed." Beyond the early honeymoon period, Davis was never popular. He unwittingly caused much internal dissension from early on. His ill health and temporary bouts of blindness disabled him for days at a time.
Coulter, viewed by today's historians as a Confederate apologist, says Davis was heroic and his will was indomitable. But his "tenacity, determination, and will power" stirred up lasting opposition from enemies that Davis could not shake. He failed to overcome "petty leaders of the states" who made the term "Confederacy" into a label for tyranny and oppression, preventing the "Stars and Bars" from becoming a symbol of larger patriotic service and sacrifice. Instead of campaigning to develop nationalism and gain support for his administration, he rarely courted public opinion, assuming an aloofness, "almost like an Adams".
Escott argues that Davis was unable to mobilize Confederate nationalism in support of his government effectively, and especially failed to appeal to the small farmers who comprised the bulk of the population. In addition to the problems caused by states' rights, Escott also emphasizes that the widespread opposition to any strong central government combined with the vast difference in wealth between the slave-owning class and the small farmers created insolvable dilemmas when the Confederate survival presupposed a strong central government backed by a united populace. The prewar claim that white solidarity was necessary to provide a unified Southern voice in Washington no longer held. Davis failed to build a network of supporters who would speak up when he came under criticism, and he repeatedly alienated governors and other state-based leaders by demanding centralized control of the war effort.
According to Coulter, Davis was not an efficient administrator as he attended to too many details, protected his friends after their failures were obvious, and spent too much time on military affairs versus his civic responsibilities. Coulter concludes he was not the ideal leader for the Southern Revolution, but he showed "fewer weaknesses than any other" contemporary character available for the role.
Robert E. Lee's assessment of Davis as president was, "I knew of none that could have done as well."
The Southern leaders met in Montgomery, Alabama, to write their constitution. Much of the Confederate States Constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery including provisions for the recognition and protection of slavery in any territory of the Confederacy. It maintained the ban on international slave-trading, though it made the ban's application explicit to "Negroes of the African race" in contrast to the U.S. Constitution's reference to "such Persons as any of the States now existing shall think proper to admit". It protected the existing internal trade of slaves among slaveholding states.
In certain areas, the Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S. Constitution of the time did, but in other areas, the states lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state for funding internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue), and spoke of "carry[ing] on the Government of the Confederate States" rather than providing for the "general welfare". State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point.
The Confederate Constitution did not specifically include a provision allowing states to secede; the Preamble spoke of each state "acting in its sovereign and independent character" but also of the formation of a "permanent federal government". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied States the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the secular language of the United States Constitution, the Confederate Constitution overtly asked God's blessing ("... invoking the favor and guidance of Almighty God ...").
Some historians have referred to the Confederacy as a form of Herrenvolk democracy.
The Montgomery Convention to establish the Confederacy and its executive met on February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were "provisional", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0.
Jefferson Davis was elected provisional president. His U.S. Senate resignation speech had greatly impressed listeners with its clear rationale for secession and its plea for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected he assumed the office of Provisional President. Three candidates for provisional Vice President were under consideration the night before the February 9 election. All were from Georgia, and the various delegations, meeting in different places, determined that two would not do, so Alexander H. Stephens was elected unanimously as provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18.
Davis and Stephens were elected president and vice president, unopposed on November 6, 1861. They were inaugurated on February 22, 1862.
Coulter stated, "No president of the U.S. ever had a more difficult task." Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their "Revolution", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In Davis' inauguration speech, he explained the Confederacy was not a French-like revolution, but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress.
The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors.
The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds votes required in the U.S. Congress. In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, as the Confederacy was defeated before the completion of his term.
The only two "formal, national, functioning, civilian administrative bodies" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. The Provisional Confederate Congress was a unicameral assembly; each state received one vote.
The Permanent Confederate Congress was elected and began its first session February 18, 1862. The Permanent Congress for the Confederacy followed the United States forms with a bicameral legislature. The Senate had two senators per state, twenty-six in all. The House numbered 106 representatives, apportioned by the free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865.
The political influences of the civilian vote, the soldier vote, and appointed representatives reflected the divisions of political geography across a diverse South. These in turn changed over time relative to Union occupation and disruption, the war's impact on the local economy, and the course of the war. Without political parties, candidates were identified chiefly by whether they had adopted secession before or only after Lincoln's call for volunteers to retake Federal property. Previous party affiliation also played a part in voter selection, predominantly secessionist Democrat or unionist Whig.
The absence of political parties made individual roll call voting all the more important, as the Confederate "freedom of roll-call voting [was] unprecedented in American legislative history." Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy including impressment of slaves, goods and scorched earth, and (4) support of the Jefferson Davis administration in its foreign affairs and negotiating peace.
The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the "Supreme Court of the Confederate States". Thus, the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government.
Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same U.S. federal district judges were appointed as Confederate States district judges. Confederate district courts began reopening in early 1861, handling many of the same types of cases as before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers.
When the matter came before the Confederate court, the property owner could not appear because he was unable to travel across the front lines between Union and Confederate forces. Thus, the District Attorney won the case by default, the property was typically sold, and the money used to further the Southern war effort. Eventually, because there was no Confederate Supreme Court, sharp attorneys like South Carolina's Edward McCrady began filing appeals. This prevented their clients' property from being sold until a supreme court could be constituted to hear the appeal, which never occurred. Where Federal troops gained control over parts of the Confederacy and re-established civilian government, US district courts sometimes resumed jurisdiction.
When the Confederacy was formed and its seceding states broke from the Union, it was at once confronted with the arduous task of providing its citizens with a mail delivery system, and, amid the American Civil War, the newly formed Confederacy created and established the Confederate Post Office. One of the first undertakings in establishing the Post Office was the appointment of John H. Reagan to the position of Postmaster General by Jefferson Davis in 1861. This made him the first Postmaster General of the Confederate Post Office and a member of Davis's presidential cabinet. Writing in 1906, historian Walter Flavius McCaleb praised Reagan's "energy and intelligence... in a degree scarcely matched by any of his associates".
When the war began, the U.S. Post Office briefly delivered mail from the secessionist states. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing U.S. postage was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the Confederacy to the U.S. was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was also inspected before being sent on.
With the chaos of the war, a working postal system was more important than ever for the Confederacy. The war had divided family members and friends, and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men away serving in the armies. Mail delivery was also important to the Confederacy for a myriad of business and military reasons. Because of the Union blockade, basic supplies were always in demand, so getting correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War are prisoner-of-war mail and blockade mail, as these items were often involved with a variety of military and other wartime activities. The postal history of the Confederacy, along with surviving Confederate mail, has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded.
The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Neely argues:
The Confederate citizen was not any freer than the Union citizen – and perhaps no less likely to be arrested by military authorities. In fact, the Confederate citizen may have been in some ways less free than his Northern counterpart. For example, freedom to travel within the Confederate states was severely limited by a domestic passport system.
Across the South, widespread rumors alarmed the whites by predicting the slaves were planning some sort of insurrection. Patrols were stepped up. The slaves did become increasingly independent, and resistant to punishment, but historians agree there were no insurrections. In the invaded areas, insubordination was more the norm than was loyalty to the old master; Bell Wiley says, "It was not disloyalty, but the lure of freedom." Many slaves became spies for the North, and large numbers ran away to federal lines.
Lincoln's Emancipation Proclamation, an executive order of the U.S. government on January 1, 1863, changed the legal status of three million slaves in designated areas of the Confederacy from "slave" to "free". The long-term effect was that the Confederacy could not preserve the institution of slavery and lost the use of the core element of its plantation labor force. Slaves were legally freed by the Proclamation, and became free by escaping to federal lines, or by advances of federal troops. Over 200,000 freed slaves were hired by the federal army as teamsters, cooks, launderers and laborers, and eventually as soldiers. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their slaves as far as possible out of reach of the Union army. Though the concept was promoted within certain circles of the Union hierarchy during and immediately following the war, no program of reparations for freed slaves was ever attempted. Unlike other Western countries, such as Britain and France, the U.S. government never paid compensation to Southern slave owners for their "lost property".
Most whites were subsistence farmers who traded their surpluses locally. The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. The South supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, as cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by non-southerners.
The plantations that enslaved over three million black people were the principal source of wealth. Most were concentrated in "black belt" plantation areas (because few white families in the poor regions owned slaves). For decades, there had been widespread fear of slave revolts. During the war, extra men were assigned to "home guard" patrol duty and governors sought to keep militia units at home for protection. Historian William Barney reports, "no major slave revolts erupted during the Civil War." Nevertheless, slaves took the opportunity to enlarge their sphere of independence, and when union forces were nearby, many ran off to join them.
Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was top-heavy income distribution. Mass production requires mass markets, and slaves living in small cabins, using self-made tools and outfitted with one suit of work clothes each year of inferior fabric, did not generate consumer demand to sustain local manufactures of any description in the same way as did a mechanized family farm of free labor in the North. The Southern economy was "pre-capitalist" in that slaves were put to work in the largest revenue-producing enterprises, not free labor markets. That labor system as practiced in the American South encompassed paternalism, whether abusive or indulgent, and that meant labor management considerations apart from productivity.
Approximately 85% of the white population in both the North and the South lived on family farms, both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was pre-capitalist in its overwhelming reliance on cash-crop agriculture to produce wealth, while the great majority of farmers fed themselves and supplied a small local market. Southern cities and industries grew faster than ever before, but the thrust of the country's exponential growth elsewhere was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a "great distance", as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets.
A third respect in which the Southern economy was pre-capitalist relates to the cultural setting. The South and southerners did not adopt a work ethic, nor the habits of thrift that marked the rest of the country. It had access to the tools of capitalism, but it did not adopt its culture. The Southern Cause as a national economy in the Confederacy was grounded in "slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations".
The Confederacy started its existence as an agrarian economy with exports, to a world market, of cotton, and, to a lesser extent, tobacco and sugarcane. Local food production included grains, hogs, cattle, and gardens. The cash came from exports but the Southern people spontaneously stopped exports in early 1861 to hasten the impact of "King Cotton", a failed strategy to coerce international support for the Confederacy through its cotton exports. When the blockade was announced, commercial shipping practically ended (the ships could not get insurance), and only a trickle of supplies came via blockade runners. The cutoff of exports was an economic disaster for the South, rendering useless its most valuable properties, its plantations and their enslaved workers. Many planters kept growing cotton, which piled up everywhere, but most turned to food production. All across the region, the lack of repair and maintenance wasted away the physical assets.
The eleven states had produced $155 million (~$4.14 billion in 2022) in manufactured goods in 1860, chiefly from local gristmills and from lumber, processed tobacco, cotton goods, and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, which were never under Confederate control. The government did set up munitions factories in the Deep South. Combined with captured munitions and those coming via blockade runners, the armies were kept minimally supplied with weapons. The soldiers suffered from reduced rations, lack of medicines, and the growing shortages of uniforms, shoes and boots. Shortages were much worse for civilians, and the prices of necessities steadily rose.
The Confederacy adopted a tariff or tax on imports of 15%, and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution by centralization and standardization, but it was too little too late as its economy was systematically strangled by blockade and raids.
In peacetime, the South's extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had developed as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. Railroads tied plantation areas to the nearest river or seaport and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines.
At the onset of the Civil War the South had a disjointed rail network plagued by differences in track gauge and a lack of interchange. Locomotives and freight cars had fixed axles and could not use tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons for transport to the connecting railroad station, where it had to await freight cars and a locomotive before proceeding. Centers requiring off-loading included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Because of this design limitation, the relatively primitive railroads of the Confederacy were unable to overcome the Union naval blockade of the South's crucial intra-coastal and river routes.
The Confederacy had no plan to expand, protect or encourage its railroads. Southerners' refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. In the early years of the war the Confederate government had a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the de facto control of the military. In contrast, the U.S. Congress had authorized military administration of Union-controlled railroad and telegraph systems in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Confederate armies successfully reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865.
In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment and raids on both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and rolling stock wore out through heavy use.
The Confederate army experienced a persistent shortage of horses and mules and requisitioned them with dubious promissory notes given to local farmers and breeders. Union forces paid in real money and found ready sellers in the South. Both armies needed horses for cavalry and for artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the invading Union forces had a policy of shooting all the local horses and mules that they did not need, in order to keep them out of Confederate hands. The Confederate armies and farmers experienced a growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months.
Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, with a total face value of $1.5 billion. Much of it was signed by Treasurer Edward C. Elmore. Inflation became rampant as the paper money depreciated and eventually became worthless. The state governments and some localities printed their own paper money, adding to the runaway inflation. Many bills still exist, although in recent years counterfeit copies have proliferated.
The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. After the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. Confederate politicians were worried about angering the general population with heavy taxes; a tax increase might disillusion many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the Southern states throughout the rest of the war. By April 1863, for example, the cost of flour in Richmond had risen to $100 (~$2,377 in 2022) a barrel, and housewives were rioting.
The Confederate government took over the three national mints in its territory: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861 all three facilities produced small amounts of gold coinage, and New Orleans struck half dollars as well. Since the mints used the dies already on hand, all of these coins appear to be U.S. issues. However, by comparing slight differences in the dies, specialists can distinguish 1861-O half dollars minted under the authority of the U.S. government, the State of Louisiana, or the Confederate States. Unlike the gold coins, this issue was produced in significant numbers (over 2.5 million) and is inexpensive in lower grades, although fakes have been made for sale to the public. Before the New Orleans Mint ceased operation in May 1861, the Confederate government also used its own reverse design to strike four half dollars, now among the great rarities of American numismatics. A lack of silver and gold precluded further coinage. The Confederacy apparently also experimented with issuing one-cent coins, although only 12 were produced, by a jeweler in Philadelphia who was afraid to send them to the South. As with the half dollars, copies were later made as souvenirs.
U.S. coinage was hoarded and saw no general circulation. It was nevertheless admitted as legal tender up to $10, as were British sovereigns, French Napoleons, and Spanish and Mexican doubloons, at a fixed rate of exchange. Confederate money consisted of paper notes and postage stamps.
By mid-1861, the Union naval blockade virtually shut down the export of cotton and the import of manufactured goods. Food that formerly came overland was cut off.
As women were the ones who remained at home, they had to cope with the lack of food and supplies. They cut back on purchases, used old materials, and planted more flax and peas to provide clothing and food. They used ersatz substitutes when possible, but there was no real coffee, only okra and chicory substitutes. Households were severely hurt by inflation in the cost of everyday items like flour, and by shortages of food, fodder for the animals, and medical supplies for the wounded.
State governments requested that planters grow less cotton and more food, but most refused. When cotton prices soared in Europe, planters expected that Europe would soon intervene to break the blockade and make them rich, but Europe remained neutral. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess, but food shortages only worsened, especially in the towns.
The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1863, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food, as they were angry at ineffective state relief efforts, speculators, and merchants. As wives and widows of soldiers, they were hurt by the inadequate welfare system.
By the end of the war deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Every Confederate state was affected, but most of the war was fought in Virginia and Tennessee, while Texas and Florida saw the least military action. Much of the damage was caused by direct military action, but most resulted from a lack of repairs and upkeep and from the deliberate using up of resources. Historians have recently estimated how much of the devastation was caused by military action: Paul Paskoff calculates that Union military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, some people had undoubtedly fled to safer areas, so the exact population exposed to war is unknown.
The eleven Confederate States in the 1860 United States Census had 297 towns and cities with 835,000 people; of these, 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); these eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated what their actual populations were when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1% of the Confederacy's 1860 population. In addition, 45 courthouses were burned (out of 830). The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, it was 40% lower, at just $48 million. Many old tools had broken through heavy use; new tools were rarely available, and even repairs were difficult.
The economic losses affected everyone. Banks and insurance companies were mostly bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. Most debts were also left behind. Most farms were intact, but most had lost their horses, mules and cattle; fences and barns were in disrepair. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby. The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but a depleted stock of material resources that they could manage and operate themselves. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The North, by contrast, absorbed its material losses so effortlessly that it appeared richer at the end of the war than at the beginning.
The rebuilding took years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. One historian has summarized the collapse of the transportation infrastructure needed for economic recovery:
One of the greatest calamities which confronted Southerners was the havoc wrought on the transportation system. Roads were impassable or nonexistent, and bridges were destroyed or washed away. The important river traffic was at a standstill: levees were broken, channels were blocked, the few steamboats which had not been captured or destroyed were in a state of disrepair, wharves had decayed or were missing, and trained personnel were dead or dispersed. Horses, mules, oxen, carriages, wagons, and carts had nearly all fallen prey at one time or another to the contending armies. The railroads were paralyzed, with most of the companies bankrupt. These lines had been the special target of the enemy. On one stretch of 114 miles in Alabama, every bridge and trestle was destroyed, cross-ties rotten, buildings burned, water-tanks gone, ditches filled up, and tracks grown up in weeds and bushes ... Communication centers like Columbia and Atlanta were in ruins; shops and foundries were wrecked or in disrepair. Even those areas bypassed by battle had been pirated for equipment needed on the battlefront, and the wear and tear of wartime usage without adequate repairs or replacements reduced all to a state of disintegration.
More than 250,000 Confederate soldiers died during the war. Some widows abandoned their family farms and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an "old maid" was an embarrassment to the woman and her family, but after the war, it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the "New Woman" emerged – she was self-sufficient and independent, and stood in sharp contrast to the "Southern Belle" of antebellum lore.
The first official flag of the Confederate States of America—called the "Stars and Bars"—originally had seven stars, representing the seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total reached 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run (First Manassas), it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate "Battle Flag" was designed for use by troops in the field. Also known as the "Southern Cross", it gave rise to many variations of the original square configuration.
Although the Southern Cross was never officially adopted by the Confederate government, its popularity among both soldiers and the civilian population was a primary reason it was made the main feature when a new national flag was adopted in 1863. This new standard—known as the "Stainless Banner"—consisted of a lengthened white field with a Battle Flag canton. This flag, too, had problems in military use: on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end.
Because of its depiction in 20th-century popular media, many people consider the rectangular battle flag with the dark blue bars to be synonymous with "the Confederate Flag", but this flag was never adopted as a Confederate national flag.
The "Confederate Flag" has a color scheme similar to that of the most common Battle Flag design, but is rectangular rather than square. It is a highly recognizable symbol of the South in the United States today and continues to be a controversial icon.
Unionism—opposition to the Confederacy—was strong in certain areas within the Confederate States. Southern Unionists (white Southerners who were opposed to the Confederacy) were widespread in the mountain regions of Appalachia and the Ozarks. Unionists, led by Parson Brownlow and Senator Andrew Johnson, took control of East Tennessee in 1863. Unionists also attempted to take control of western Virginia, but never effectively held more than half of the counties that formed the new state of West Virginia. Union forces captured parts of coastal North Carolina and at first were largely welcomed by local Unionists. That view would change for some, as the occupiers came to be perceived as oppressive, callous, radical, and favorable to Freedmen. Occupiers pillaged, freed slaves, and evicted those who refused to swear loyalty oaths to the Union.
Support for the Confederacy was also low in certain areas of Texas, where Unionism persisted. Claude Elliott estimates that only a third of the population actively supported the Confederacy. Many Unionists supported the Confederacy after the war began, but many others clung to their Unionism throughout the war, especially in the northern counties, the German districts of the Texas Hill Country, and majority-Mexican areas. According to Ernest Wallace: "This account of a dissatisfied Unionist minority, although historically essential, must be kept in its proper perspective, for throughout the war the overwhelming majority of the people zealously supported the Confederacy ..." Randolph B. Campbell states, "In spite of terrible losses and hardships, most Texans continued throughout the war to support the Confederacy as they had supported secession". Dale Baum, in his analysis of Texas politics of the era, counters: "This idea of a Confederate Texas united politically against northern adversaries was shaped more by nostalgic fantasies than by wartime realities." He characterizes Texas Civil War history as "a morose story of intragovernmental rivalries coupled with wide-ranging disaffection that prevented effective implementation of state wartime policies".
In Texas, local officials harassed and murdered Unionists and Germans during the Civil War. In Cooke County, Texas, 150 suspected Unionists were arrested; 25 were lynched without trial and 40 more were hanged after a summary trial. Draft resistance was widespread, especially among Texans of German or Mexican descent, many of the latter leaving for Mexico. Confederate officials attempted to hunt down and kill potential draftees who had gone into hiding.
Civil liberties were of small concern in both the North and the South. Lincoln and Davis both took a hard line against dissent. Historian Mark Neely explores how the Confederacy became a virtual police state, with guards and patrols all about and a domestic passport system whereby everyone needed official permission each time they wanted to travel. Over 4,000 suspected Unionists were imprisoned in the Confederate States without trial.
Southern Unionists were also known as Union Loyalists or Lincoln's Loyalists. Within the eleven Confederate states, Tennessee (especially East Tennessee), Virginia (which included West Virginia at the time), and North Carolina had the largest populations of Unionists. Many areas of Southern Appalachia harbored pro-Union sentiment. Up to 100,000 men living in states under Confederate control served in the Union Army or in pro-Union guerrilla groups. Although Southern Unionists came from all classes, most differed socially, culturally, and economically from the region's dominant pre-war planter class.
The Confederate States of America claimed a total of 2,919 miles (4,698 km) of coastline, so a large part of its territory lay on the seacoast, with level and often sandy or marshy ground. Most of the interior consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The southern reaches of the Mississippi River bisected the country, and the western half was often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas, at 8,750 feet (2,670 m).
Much of the area claimed by the Confederate States of America had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps (such as those in Florida and Louisiana) to semi-arid steppes and arid deserts west of longitude 100 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, on both sides more soldiers died from disease than were killed in combat, a fact hardly atypical of pre-World War I conflicts.
The United States Census of 1860 gives a picture of the overall 1860 population for the areas that had joined the Confederacy. The population numbers exclude non-assimilated Indian tribes.
In 1860, the areas that later formed the eleven Confederate states (including the future West Virginia) had 132,760 (2%) free blacks. Males made up 49% of the total population and females 51% (whites: 49% male, 51% female; slaves: 50% male, 50% female; free blacks: 47% male, 53% female).
The CSA was overwhelmingly rural. Few towns had populations of more than 1,000—the typical county seat had a population of fewer than 500. Cities were rare; of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory—and the Union captured New Orleans in 1862. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Other Southern cities in the border slave-holding states such as Baltimore, Washington, D.C., Wheeling, Alexandria, Louisville, and St. Louis never came under the control of the Confederate government.
The most prominent cities of the Confederacy, in order of population size, included:
(See also Atlanta in the Civil War; Charleston, South Carolina, in the Civil War; Nashville in the Civil War; New Orleans in the Civil War; Wilmington, North Carolina, in the American Civil War; and Richmond in the Civil War.)
The CSA was overwhelmingly Protestant, and both the free and the enslaved populations identified with evangelical Protestantism. Baptists and Methodists together formed majorities of both the white and the slave population (see Black church). Freedom of religion and separation of church and state were fully ensured by Confederate laws. Church attendance was very high, and chaplains played a major role in the Army.
Most large denominations experienced a North–South split in the prewar era over the issue of slavery, and the creation of a new country necessitated independent structures. For example, the Presbyterian Church in the United States split, with much of the new leadership provided by Joseph Ruggles Wilson (father of President Woodrow Wilson). In 1861, he organized the meeting that formed the General Assembly of the Southern Presbyterian Church and served as its chief executive for 37 years. Baptists and Methodists both broke off from their Northern coreligionists over the slavery issue, forming the Southern Baptist Convention and the Methodist Episcopal Church, South, respectively. Elites in the southeast favored the Protestant Episcopal Church in the Confederate States of America, which had reluctantly split from the Episcopal Church in 1861. Other elites were Presbyterians belonging to the Presbyterian Church in the United States, founded in 1861. Catholics included an Irish working-class element in coastal cities and an old French element in southern Louisiana. Other smaller and scattered religious populations included Lutherans, the Holiness movement, other Reformed groups, Christian fundamentalists, the Stone-Campbell Restoration Movement, the Churches of Christ, the Latter Day Saint movement, Adventists, Muslims, Jews, Native American animists, deists, and the irreligious.
The Southern churches met the shortage of Army chaplains by sending missionaries. The Southern Baptists began sending missionaries in 1862 and fielded a total of 78. Presbyterians were even more active, with 112 missionaries in January 1865. Other missionaries were funded and supported by the Episcopalians, Methodists, and Lutherans. One result was wave after wave of revivals in the Army.
Military leaders of the Confederacy (with their state or country of birth and highest rank) included:
"title": "History"
},
{
"paragraph_id": 60,
"text": "The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held in the new capital. The Permanent Confederate Congress and President were elected in the states and army camps on November 6, 1861. The First Congress met in four sessions in Richmond from February 18, 1862, to February 17, 1864. The Second Congress met there in two sessions, from May 2, 1864, to March 18, 1865.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress led by Henry S. Foote of Tennessee argued for moving the capital from Richmond. At the approach of Federal armies in mid-1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress to session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate farther south. Little came of these plans before Lee's surrender at Appomattox Court House, Virginia on April 9, 1865. Davis and most of his cabinet fled to Danville, Virginia, which served as their headquarters for eight days.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "During the four years of its existence, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. None were ever officially recognized by a foreign government. The United States government regarded the Southern states as being in rebellion or insurrection and so refused any formal recognition of their status.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "Even before Fort Sumter, U.S. Secretary of State William H. Seward issued formal instructions to the American minister to Britain, Charles Francis Adams:",
"title": "History"
},
{
"paragraph_id": 64,
"text": "[Make] no expressions of harshness or disrespect, or even impatience concerning the seceding States, their agents, or their people, [those States] must always continue to be, equal and honored members of this Federal Union, [their citizens] still are and always must be our kindred and countrymen.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "Seward instructed Adams that if the British government seemed inclined to recognize the Confederacy, or even waver in that regard, it was to receive a sharp warning, with a strong hint of war:",
"title": "History"
},
{
"paragraph_id": 66,
"text": "[if Britain is] tolerating the application of the so-called seceding States, or wavering about it, [they cannot] remain friends with the United States ... if they determine to recognize [the Confederacy], [Britain] may at the same time prepare to enter into alliance with the enemies of this republic.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "The United States government never declared war on those \"kindred and countrymen\" in the Confederacy but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861. It called for troops to recapture forts and suppress what Lincoln later called an \"insurrection and rebellion\".",
"title": "History"
},
{
"paragraph_id": 68,
"text": "Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war predominantly governed military relationships on both sides of uniformed conflict.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "On the part of the Confederacy, immediately following Fort Sumter the Confederate Congress proclaimed that \"war exists between the Confederate States and the Government of the United States, and the States and Territories thereof\". A state of war was not to formally exist between the Confederacy and those states and territories in the United States allowing slavery, although Confederate Rangers were compensated for destruction they could effect there throughout the war.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "Concerning the international status and nationhood of the Confederate States of America, in 1869 the United States Supreme Court in Texas v. White, 74 U.S. (7 Wall.) 700 (1869) ruled Texas' declaration of secession was legally null and void. Jefferson Davis, former President of the Confederacy, and Alexander H. Stephens, its former vice-president, both wrote postwar arguments in favor of secession's legality and the international legitimacy of the Government of the Confederate States of America, most notably Davis' The Rise and Fall of the Confederate Government.",
"title": "History"
},
{
"paragraph_id": 71,
"text": "Once war with the United States began, the Confederacy pinned its hopes for survival on military intervention by Great Britain or France. The Confederate government sent James M. Mason to London and John Slidell to Paris. On their way to Europe in 1861, the U.S. Navy intercepted their ship, the Trent, and forcibly took them to Boston, an international episode known as the Trent Affair. The diplomats were eventually released and continued their voyage to Europe. However, their mission was unsuccessful; historians give them low marks for their poor diplomacy. Neither secured diplomatic recognition for the Confederacy, much less military assistance.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "The Confederates who had believed that \"cotton is king\", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources of cotton, most notably India and Egypt. Britain had so much cotton that it was exporting some to France. England was not about to go to war with the U.S. to acquire more cotton at the risk of losing the large quantities of food imported from the North.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "Aside from the purely economic questions, there was also the clamorous ethical debate. Great Britain took pride in being a leader in ending the transatlantic enslavement of Africans, phasing the practice out within its empire starting in 1833 and deploying the Royal Navy to patrol the waters of the middle passage to prevent additional slave ships from reaching the Western Hemisphere. Confederate diplomats found little support for American slavery, cotton trade or not. A series of slave narratives about American slavery was being published in London. It was in London that the first World Anti-Slavery Convention had been held in 1840; it was followed by regular smaller conferences. A string of eloquent and sometimes well-educated black abolitionist speakers crisscrossed England, Scotland, and Ireland. In addition to exposing the reality of America's chattel slavery—some were fugitive slaves—they rebutted the Confederate position that blacks were \"unintellectual, timid, and dependent\", and \"not equal to the white man...the superior race,\" as it was put by Confederate Vice-president Alexander H. Stephens in his famous Cornerstone Speech. Frederick Douglass, Henry Highland Garnet, Sarah Parker Remond, her brother Charles Lenox Remond, James W. C. Pennington, Martin Delany, Samuel Ringgold Ward, and William G. Allen all spent years in Britain, where fugitive slaves were safe and, as Allen said, there was an \"absence of prejudice against color. Here the colored man feels himself among friends, and not among enemies\". One speaker alone, William Wells Brown, gave more than 1,000 lectures on the shame of American chattel slavery.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston, showed interest in recognition of the Confederacy or at least mediation of the war. British Chancellor of the Exchequer William Gladstone, convinced of the necessity of intervention on the Confederate side based on the successful diplomatic intervention in Second Italian War of Independence against Austria, attempted unsuccessfully to convince Lord Palmerston to intervene. By September 1862 the Union victory at the Battle of Antietam, Lincoln's preliminary Emancipation Proclamation and abolitionist opposition in Britain put an end to these possibilities. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain-shipments, the end of British exports to the U.S., and the seizure of billions of pounds invested in American securities. War would have meant higher taxes in Britain, another invasion of Canada, and full-scale worldwide attacks on the British merchant fleet. Outright recognition would have meant certain war with the United States. In mid-1862, fears of a race war (as had transpired in the Haitian Revolution of 1791–1804) led to the British considering intervention for humanitarian reasons. Lincoln's Emancipation Proclamation did not lead to interracial violence, let alone a bloodbath, but it did give the friends of the Union strong talking points in the arguments that raged across Britain.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "John Slidell, the Confederate States emissary to France, succeeded in negotiating a loan of $15,000,000 from Erlanger and other French capitalists. The money went to buy ironclad warships, and military supplies that came in with blockade runners. The British government did allow the construction of blockade runners in Britain; they were owned and operated by British financiers and ship owners; a few were owned and operated by the Confederacy. The British investors' goal was to get highly profitable cotton.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. Those nations recognized the Union and Confederate sides as belligerents. In 1863 the Confederacy expelled European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. Some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border. The Confederacy appointed Ambrose Dudley Mann as special agent to the Holy See on September 24, 1863. But the Holy See never released a formal statement supporting or recognizing the Confederacy. In November 1863, Mann met Pope Pius IX in person and received a letter supposedly addressed \"to the Illustrious and Honorable Jefferson Davis, President of the Confederate States of America\"; Mann had mistranslated the address. In his report to Richmond, Mann claimed a great diplomatic achievement for himself, asserting the letter was \"a positive recognition of our Government\". The letter was indeed used in propaganda, but Confederate Secretary of State Judah P. Benjamin told Mann it was \"a mere inferential recognition, unconnected with political action or the regular establishment of diplomatic relations\" and thus did not assign it the weight of formal recognition.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "Nevertheless, the Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers, both official and unofficial, to assess whether there had been a de facto establishment of independence. These observers included Arthur Lyon Fremantle of the British Coldstream Guards, who entered the Confederacy via Mexico, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian Army. European travelers visited and wrote accounts for publication. Importantly in 1862, the Frenchman Charles Girard's Seven months in the rebel states during the North American War testified \"this government ... is no longer a trial government ... but really a normal government, the expression of popular will\". Fremantle went on to write in his book Three Months in the Southern States that he had:",
"title": "History"
},
{
"paragraph_id": 78,
"text": "...not attempted to conceal any of the peculiarities or defects of the Southern people. Many persons will doubtless highly disapprove of some of their customs and habits in the wilder portion of the country; but I think no generous man, whatever may be his political opinions, can do otherwise than admire the courage, energy, and patriotism of the whole population, and the skill of its leaders, in this struggle against great odds. And I am also of opinion that many will agree with me in thinking that a people in which all ranks and both sexes display a unanimity and a heroism which can never have been surpassed in the history of the world, is destined, sooner or later, to become a great and independent nation.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make \"direct proposition\" to Britain for joint recognition. The Emperor made the same assurance to British Members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament June 30 supporting joint Anglo-French recognition of the Confederacy. \"Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure.\" Following the double disasters at Vicksburg and Gettysburg in July 1863, the Confederates \"suffered a severe loss of confidence in themselves\" and withdrew into an interior defensive position. There would be no help from the Europeans.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F. Kenner to Europe with a message that the war was fought solely for \"the vindication of our rights to self-government and independence\" and that \"no sacrifice is too great, save that of honor\". The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. Davis's message could not explicitly acknowledge that slavery was on the bargaining table due to still-strong domestic support for slavery among the wealthy and politically influential. European leaders all saw that the Confederacy was on the verge of total defeat.",
"title": "History"
},
{
"paragraph_id": 81,
"text": "The Confederacy's biggest foreign policy successes were with Cuba and Brazil. Militarily this meant little during the war. Brazil represented the \"peoples most identical to us in Institutions\", in which slavery remained legal until the 1880s. Cuba was a Spanish colony and the Captain–General of Cuba declared in writing that Confederate ships were welcome, and would be protected in Cuban ports. They were also welcome in Brazilian ports; slavery was legal throughout Brazil, and the abolitionist movement was small. After the end of the war, Brazil was the primary destination of those Southerners who wanted to continue living in a slave society, where, as one immigrant remarked, Confederado slaves were cheap. Historians speculate that if the Confederacy had achieved independence, it probably would have tried to acquire Cuba as a base of expansion.",
"title": "History"
},
{
"paragraph_id": 82,
"text": "Most soldiers who joined Confederate national or state military units joined voluntarily. Perman (2010) says historians are of two minds on why millions of soldiers seemed so eager to fight, suffer and die over four years:",
"title": "History"
},
{
"paragraph_id": 83,
"text": "Some historians emphasize that Civil War soldiers were driven by political ideology, holding firm beliefs about the importance of liberty, Union, or state rights, or about the need to protect or to destroy slavery. Others point to less overtly political reasons to fight, such as the defense of one's home and family, or the honor and brotherhood to be preserved when fighting alongside other men. Most historians agree that, no matter what he thought about when he went into the war, the experience of combat affected him profoundly and sometimes affected his reasons for continuing to fight.",
"title": "History"
},
{
"paragraph_id": 84,
"text": "Civil War historian E. Merton Coulter wrote that for those who would secure its independence, \"The Confederacy was unfortunate in its failure to work out a general strategy for the whole war\". Aggressive strategy called for offensive force concentration. Defensive strategy sought dispersal to meet demands of locally minded governors. The controlling philosophy evolved into a combination \"dispersal with a defensive concentration around Richmond\". The Davis administration considered the war purely defensive, a \"simple demand that the people of the United States would cease to war upon us\". Historian James M. McPherson is a critic of Lee's offensive strategy: \"Lee pursued a faulty military strategy that ensured Confederate defeat\".",
"title": "History"
},
{
"paragraph_id": 85,
"text": "As the Confederate government lost control of territory in campaign after campaign, it was said that \"the vast size of the Confederacy would make its conquest impossible\". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South. Heat exhaustion, sunstroke, endemic diseases such as malaria and typhoid would match the destructive effectiveness of the Moscow winter on the invading armies of Napoleon.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "Early in the war both sides believed that one great battle would decide the conflict; the Confederates won a surprise victory at the First Battle of Bull Run, also known as First Manassas (the name used by Confederate forces). It drove the Confederate people \"insane with joy\"; the public demanded a forward movement to capture Washington, relocate the Confederate capital there, and admit Maryland to the Confederacy. A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion into Maryland halted at the Battle of Antietam in October 1862, generals proposed concentrating forces from state commands to re-invade the north. Nothing came of it. Again in mid-1863 at his incursion into Pennsylvania, Lee requested of Davis that Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign.",
"title": "History"
},
{
"paragraph_id": 87,
"text": "The eleven states of the Confederacy were outnumbered by the North about four-to-one in military manpower. It was overmatched far more in military equipment, industrial facilities, railroads for transport, and wagons supplying the front.",
"title": "History"
},
{
"paragraph_id": 88,
"text": "Confederates slowed the Yankee invaders, at heavy cost to the Southern infrastructure. The Confederates burned bridges, laid land mines in the roads, and made harbors inlets and inland waterways unusable with sunken mines (called \"torpedoes\" at the time). Coulter reports:",
"title": "History"
},
{
"paragraph_id": 89,
"text": "Rangers in twenty to fifty-man units were awarded 50% valuation for property destroyed behind Union lines, regardless of location or loyalty. As Federals occupied the South, objections by loyal Confederate concerning Ranger horse-stealing and indiscriminate scorched earth tactics behind Union lines led to Congress abolishing the Ranger service two years later.",
"title": "History"
},
{
"paragraph_id": 90,
"text": "The Confederacy relied on external sources for war materials. The first came from trade with the enemy. \"Vast amounts of war supplies\" came through Kentucky, and thereafter, western armies were \"to a very considerable extent\" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of \"outstanding importance\". On April 17, President Davis called on privateer raiders, the \"militia of the sea\", to wage war on U.S. seaborne commerce. Despite noteworthy effort, over the course of the war the Confederacy was found unable to match the Union in ships and seamanship, materials and marine construction.",
"title": "History"
},
{
"paragraph_id": 91,
"text": "An inescapable obstacle to success in the warfare of mass armies was the Confederacy's lack of manpower, and sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–63, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, \"More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy.\"",
"title": "History"
},
{
"paragraph_id": 92,
"text": "The military armed forces of the Confederacy comprised three branches: Army, Navy and Marine Corps.",
"title": "History"
},
{
"paragraph_id": 93,
"text": "The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and were appointed to senior positions. Many had served in the Mexican–American War (including Robert E. Lee and Jefferson Davis), but some such as Leonidas Polk (who graduated from West Point but did not serve in the Army) had little or no experience.",
"title": "History"
},
{
"paragraph_id": 94,
"text": "The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks. Although no Army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia in 1863, but no midshipmen graduated before the Confederacy's end.",
"title": "History"
},
{
"paragraph_id": 95,
"text": "Most soldiers were white males aged between 16 and 28. The median year of birth was 1838, so half the soldiers were 23 or older by 1861. In early 1862, the Confederate Army was allowed to disintegrate for two months following expiration of short-term enlistments. Most of those in uniform would not re-enlist following their one-year commitment, so on April 16, 1862, the Confederate Congress enacted the first mass conscription on the North American continent. (The U.S. Congress followed a year later on March 3, 1863, with the Enrollment Act.) Rather than a universal draft, the initial program was a selective service with physical, religious, professional and industrial exemptions. These were narrowed as the war progressed. Initially substitutes were permitted, but by December 1863 these were disallowed. In September 1862 the age limit was increased from 35 to 45 and by February 1864, all men under 18 and over 45 were conscripted to form a reserve for state defense inside state borders. By March 1864, the Superintendent of Conscription reported that all across the Confederacy, every officer in constituted authority, man and woman, \"engaged in opposing the enrolling officer in the execution of his duties\". Although challenged in the state courts, the Confederate State Supreme Courts routinely rejected legal challenges to conscription.",
"title": "History"
},
{
"paragraph_id": 96,
"text": "Many thousands of slaves served as personal servants to their owner, or were hired as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for \"local defense, not combat\". Depleted by casualties and desertions, the military suffered chronic manpower shortages. In early 1865, the Confederate Congress, influenced by the public support by General Lee, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused \"to guarantee the freedom of black volunteers\". No more than two hundred black combat troops were ever raised.",
"title": "History"
},
{
"paragraph_id": 97,
"text": "The immediate onset of war meant that it was fought by the \"Provisional\" or \"Volunteer Army\". State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large \"Provisional\" armies answering only to Davis. When filling the Confederate government's call for 100,000 men, another 200,000 were turned away by accepting only those enlisted \"for the duration\" or twelve-month volunteers who brought their own arms or horses.",
"title": "History"
},
{
"paragraph_id": 98,
"text": "It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers. Efficiency in the lower officers was \"greater than could have been reasonably expected\". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was governor-appointed or elected by unit enlisted. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available.",
"title": "History"
},
{
"paragraph_id": 99,
"text": "Anticipating the need for more \"duration\" men, in January 1862 Congress provided for company level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress allowed for Davis to require numbers of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws.",
"title": "History"
},
{
"paragraph_id": 100,
"text": "The veteran Confederate army of early 1862 was mostly twelve-month volunteers with terms about to expire. Enlisted reorganization elections disintegrated the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a \"rapid and widespread\" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants.",
"title": "History"
},
{
"paragraph_id": 101,
"text": "In early 1862, the popular press suggested the Confederacy required a million men under arms. But veteran soldiers were not re-enlisting, and earlier secessionist volunteers did not reappear to serve in war. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards.",
"title": "History"
},
{
"paragraph_id": 102,
"text": "The Confederacy passed the first American law of national conscription on April 16, 1862. The white males of the Confederate States from 18 to 35 were declared members of the Confederate army for three years, and all men then enlisted were extended to a three-year term. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts, in September those from 35 to 45 became conscripts. The cry of \"rich man's war and a poor man's fight\" led Congress to abolish the substitute system altogether in December 1863. All principals benefiting earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, those under eighteen and over forty-five to be limited to in-state duty.",
"title": "History"
},
{
"paragraph_id": 103,
"text": "Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, industry, ministers, teaching and physical fitness. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers.",
"title": "History"
},
{
"paragraph_id": 104,
"text": "Rich men's sons were appointed to the socially outcast \"overseer\" occupation, but the measure was received in the country with \"universal odium\". The legislative vehicle was the controversial Twenty Negro Law that specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedaling six months later, Congress provided overseers under 45 could be exempted only if they held the occupation before the first Conscription Act. The number of officials under state exemptions appointed by state Governor patronage expanded significantly. By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, unit officers in the field reported that over-50 and under-17-year-old substitutes made up to 90% of the desertions.",
"title": "History"
},
{
"paragraph_id": 105,
"text": "The Conscription Act of February 1864 \"radically changed the whole system\" of selection. It abolished industrial exemptions, placing detail authority in President Davis. As the shame of conscription was greater than a felony conviction, the system brought in \"about as many volunteers as it did conscripts.\" Many men in otherwise \"bombproof\" positions were enlisted in one way or another, nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as state Governors would allow. It had a checkered career of \"contention, opposition and futility\". Armies appointed alternative military \"recruiters\" to bring in the out-of-uniform 17–50-year-old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops. \"Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable.\" By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. The service retained men who had lost but one arm or a leg in home guards. Ultimately, conscription was a failure, and its main value was in goading men to volunteer.",
"title": "History"
},
{
"paragraph_id": 106,
"text": "The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, \"This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment.\"",
"title": "History"
},
{
"paragraph_id": 107,
"text": "The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston.",
"title": "History"
},
{
"paragraph_id": 108,
"text": "In January, President James Buchanan had attempted to resupply the garrison with the steamship, Star of the West, but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that without Confederate resistance to the resupply there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender.",
"title": "History"
},
{
"paragraph_id": 109,
"text": "Following Sumter, Lincoln directed states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories at Big Bethel (Bethel Church, Virginia), First Bull Run (First Manassas) in Virginia July and in August, Wilson's Creek (Oak Hills) in Missouri. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri. Both North and South began training up armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September and no serious Confederate advance in western Virginia occurred until the next year.",
"title": "History"
},
{
"paragraph_id": 110,
"text": "Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the abandoned slaves. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862.",
"title": "History"
},
{
"paragraph_id": 111,
"text": "The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections.",
"title": "History"
},
{
"paragraph_id": 112,
"text": "Much of northwestern Virginia was under Federal control. In February and March, most of Missouri and Kentucky were Union \"occupied, consolidated, and used as staging areas for advances further South\". Following the repulse of a Confederate counterattack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where at the naval Battle of Memphis, its River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base.",
"title": "History"
},
{
"paragraph_id": 113,
"text": "Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments on every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it \"almost impossible to bring their prizes into Confederate ports\". British firms developed small fleets of blockade running companies, such as John Fraser and Company and S. Isaac, Campbell & Company while the Ordnance Department secured its own blockade runners for dedicated munitions cargoes.",
"title": "History"
},
{
"paragraph_id": 114,
"text": "During the Civil War fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade, in March the ironclad CSS Virginia was forced into port and burned by Confederates at their retreat. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade. Attempts were made by Commodore Josiah Tattnall III's ironclads from Savannah in 1862 with the CSS Atlanta. Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports. They were converted into commerce-raiding cruisers, and manned by their British crews.",
"title": "History"
},
{
"paragraph_id": 115,
"text": "In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam MD, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg VA in December. Both armies then turned to winter quarters to recruit and train for the coming spring.",
"title": "History"
},
{
"paragraph_id": 116,
"text": "In an attempt to seize the initiative, reprove, protect farms in mid-growing season and influence U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving Confederates in control of but 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements Loring abandoned his position and by November the region was back in Federal control.",
"title": "History"
},
{
"paragraph_id": 117,
"text": "The failed Middle Tennessee campaign was ended January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), both sides losing the largest percentage of casualties suffered during the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay.",
"title": "History"
},
{
"paragraph_id": 118,
"text": "Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters, Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg, Pennsylvania despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as \"The Confederates did not gain a victory, neither did the enemy.\"",
"title": "History"
},
{
"paragraph_id": 119,
"text": "September and November left Confederates yielding Chattanooga, Tennessee, the gateway to the lower south. For the remainder of the war fighting was restricted inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg.",
"title": "History"
},
{
"paragraph_id": 120,
"text": "In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whom were British. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864 the ironclad CSS Albemarle engaged Union gunboats for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater.",
"title": "History"
},
{
"paragraph_id": 121,
"text": "Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook.",
"title": "History"
},
{
"paragraph_id": 122,
"text": "The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The \"breadbasket of the Confederacy\" in the Great Valley of Virginia was occupied by Philip Sheridan. The Union Blockade captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack.",
"title": "History"
},
{
"paragraph_id": 123,
"text": "The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia—North Carolina, central Alabama—Florida, and Texas, the latter two areas less from any notion of resistance than from the disinterest of Federal forces to occupy them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis' capital.",
"title": "History"
},
{
"paragraph_id": 124,
"text": "The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered a remnant of 50,000 from the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. \"The Surrender\" marked the end of the Confederacy. The CSS Stonewall sailed from Europe to break the Union blockade in March; on making Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last confederate military unit, the commerce raider CSS Shenandoah, surrendered on November 6, 1865, in Liverpool.",
"title": "History"
},
{
"paragraph_id": 125,
"text": "Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed \"organized southern military resistance\". The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; \"the end had come\". Jefferson Davis' assessment in 1890 determined, \"With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States.\"",
"title": "History"
},
{
"paragraph_id": 126,
"text": "When the war ended over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in giving them out. He issued a general amnesty to all Confederate participants in the \"late Civil War\" in 1868. Congress passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act in May 1872 lifting those restrictions. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet, and no one was charged with treason. An acquittal of Davis would have been humiliating for the government.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 127,
"text": "Davis was indicted for treason but never tried; he was released from prison on bail in May 1867. The amnesty of December 25, 1868, by President Johnson eliminated any possibility of Jefferson Davis (or anyone else associated with the Confederacy) standing trial for treason.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 128,
"text": "Henry Wirz, the commandant of a notorious prisoner-of-war camp near Andersonville, Georgia, was tried and convicted by a military court, and executed on November 10, 1865. The charges against him involved conspiracy and cruelty, not treason.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 129,
"text": "The U.S. government began a decade-long process known as Reconstruction which attempted to resolve the political and constitutional issues of the Civil War. The priorities were: to guarantee that Confederate nationalism and slavery were ended, to ratify and enforce the Thirteenth Amendment which outlawed slavery; the Fourteenth which guaranteed dual U.S. and state citizenship to all native-born residents, regardless of race; and the Fifteenth, which made it illegal to deny the right to vote because of race.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 130,
"text": "By 1877, the Compromise of 1877 ended Reconstruction in the former Confederate states. Federal troops were withdrawn from the South, where conservative white Democrats had already regained political control of state governments, often through extreme violence and fraud to suppress black voting. The prewar South had many rich areas; the war left the entire region economically devastated by military action, ruined infrastructure, and exhausted resources. Still dependent on an agricultural economy and resisting investment in infrastructure, it remained dominated by the planter elite into the next century. Confederate veterans had been temporarily disenfranchised by Reconstruction policy, and Democrat-dominated legislatures passed new constitutions and amendments to now exclude most blacks and many poor whites. This exclusion and a weakened Republican Party remained the norm until the Voting Rights Act of 1965. The Solid South of the early 20th century did not achieve national levels of prosperity until long after World War II.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 131,
"text": "In Texas v. White, 74 U.S. 700 (1869) the United States Supreme Court ruled—by a 5–3 majority—that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States of America. In this case, the court held that the Constitution did not permit a state to unilaterally secede from the United States. Further, that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were \"absolutely null\", under the Constitution. This case settled the law that applied to all questions regarding state legislation during the war. Furthermore, it decided one of the \"central constitutional questions\" of the Civil War: The Union is perpetual and indestructible, as a matter of constitutional law. In declaring that no state could leave the Union, \"except through revolution or through consent of the States\", it was \"explicitly repudiating the position of the Confederate states that the United States was a voluntary compact between sovereign states\".",
"title": "Legacy and assessment"
},
{
"paragraph_id": 132,
"text": "Historian Frank Lawrence Owsley argued that the Confederacy \"died of states' rights\". The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty. The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the \"essence of military despotism\".",
"title": "Legacy and assessment"
},
{
"paragraph_id": 133,
"text": "Vice President Alexander H. Stephens feared losing the very form of republican government. Allowing President Davis to threaten \"arbitrary arrests\" to draft hundreds of governor-appointed \"bomb-proof\" bureaucrats conferred \"more power than the English Parliament had ever bestowed on the king. History proved the dangers of such unchecked authority.\" The abolishment of draft exemptions for newspaper editors was interpreted as an attempt by the Confederate government to muzzle presses, such as the Raleigh NC Standard, to control elections and to suppress the peace meetings there. As Rable concludes, \"For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights\" without considerations of military necessity, pragmatism or compromise.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 134,
"text": "In 1863, Governor Pendleton Murrah of Texas determined that state troops were required for defense against Plains Indians and Union forces that might attack from Kansas. He refused to send his soldiers to the East. Governor Zebulon Vance of North Carolina showed intense opposition to conscription, limiting recruitment success. Vance's faith in states' rights drove him into repeated, stubborn opposition to the Davis administration.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 135,
"text": "Despite political differences within the Confederacy, no national political parties were formed because they were seen as illegitimate. \"Anti-partyism became an article of political faith.\" Without a system of political parties building alternate sets of national leaders, electoral protests tended to be narrowly state-based, \"negative, carping and petty\". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, the lack of a functioning two-party system caused \"real and direct damage\" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 136,
"text": "The enemies of President Davis proposed that the Confederacy \"died of Davis\". He was unfavorably compared to George Washington by critics such as Edward Alfred Pollard, editor of the most influential newspaper in the Confederacy, the Richmond (Virginia) Examiner. E. Merton Coulter summarizes, \"The American Revolution had its Washington; the Southern Revolution had its Davis ... one succeeded and the other failed.\" Beyond the early honeymoon period, Davis was never popular. He unwittingly caused much internal dissension from early on. His ill health and temporary bouts of blindness disabled him for days at a time.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 137,
"text": "Coulter, viewed by today's historians as a Confederate apologist, says Davis was heroic and his will was indomitable. But his \"tenacity, determination, and will power\" stirred up lasting opposition from enemies that Davis could not shake. He failed to overcome \"petty leaders of the states\" who made the term \"Confederacy\" into a label for tyranny and oppression, preventing the \"Stars and Bars\" from becoming a symbol of larger patriotic service and sacrifice. Instead of campaigning to develop nationalism and gain support for his administration, he rarely courted public opinion, assuming an aloofness, \"almost like an Adams\".",
"title": "Legacy and assessment"
},
{
"paragraph_id": 138,
"text": "Escott argues that Davis was unable to mobilize Confederate nationalism in support of his government effectively, and especially failed to appeal to the small farmers who comprised the bulk of the population. In addition to the problems caused by states' rights, Escott also emphasizes that the widespread opposition to any strong central government combined with the vast difference in wealth between the slave-owning class and the small farmers created insolvable dilemmas when the Confederate survival presupposed a strong central government backed by a united populace. The prewar claim that white solidarity was necessary to provide a unified Southern voice in Washington no longer held. Davis failed to build a network of supporters who would speak up when he came under criticism, and he repeatedly alienated governors and other state-based leaders by demanding centralized control of the war effort.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 139,
"text": "According to Coulter, Davis was not an efficient administrator as he attended to too many details, protected his friends after their failures were obvious, and spent too much time on military affairs versus his civic responsibilities. Coulter concludes he was not the ideal leader for the Southern Revolution, but he showed \"fewer weaknesses than any other\" contemporary character available for the role.",
"title": "Legacy and assessment"
},
{
"paragraph_id": 140,
"text": "Robert E. Lee's assessment of Davis as president was, \"I knew of none that could have done as well.\"",
"title": "Legacy and assessment"
},
{
"paragraph_id": 141,
"text": "The Southern leaders met in Montgomery, Alabama, to write their constitution. Much of the Confederate States Constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery including provisions for the recognition and protection of slavery in any territory of the Confederacy. It maintained the ban on international slave-trading, though it made the ban's application explicit to \"Negroes of the African race\" in contrast to the U.S. Constitution's reference to \"such Persons as any of the States now existing shall think proper to admit\". It protected the existing internal trade of slaves among slaveholding states.",
"title": "Government and politics"
},
{
"paragraph_id": 142,
"text": "In certain areas, the Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S. Constitution of the time did, but in other areas, the states lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state for funding internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue), and spoke of \"carry[ing] on the Government of the Confederate States\" rather than providing for the \"general welfare\". State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point.",
"title": "Government and politics"
},
{
"paragraph_id": 143,
"text": "The Confederate Constitution did not specifically include a provision allowing states to secede; the Preamble spoke of each state \"acting in its sovereign and independent character\" but also of the formation of a \"permanent federal government\". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied States the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the secular language of the United States Constitution, the Confederate Constitution overtly asked God's blessing (\"... invoking the favor and guidance of Almighty God ...\").",
"title": "Government and politics"
},
{
"paragraph_id": 144,
"text": "Some historians have referred to the Confederacy as a form of Herrenvolk democracy.",
"title": "Government and politics"
},
{
"paragraph_id": 145,
"text": "The Montgomery Convention to establish the Confederacy and its executive met on February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were \"provisional\", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0.",
"title": "Government and politics"
},
{
"paragraph_id": 146,
"text": "Jefferson Davis was elected provisional president. His U.S. Senate resignation speech greatly impressed with its clear rationale for secession and his pleading for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected, he assumed the office of Provisional President. Three candidates for provisional Vice President were under consideration the night before the February 9 election. All were from Georgia, and the various delegations meeting in different places determined two would not do, so Alexander H. Stephens was elected unanimously provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18.",
"title": "Government and politics"
},
{
"paragraph_id": 147,
"text": "Davis and Stephens were elected president and vice president, unopposed on November 6, 1861. They were inaugurated on February 22, 1862.",
"title": "Government and politics"
},
{
"paragraph_id": 148,
"text": "Coulter stated, \"No president of the U.S. ever had a more difficult task.\" Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their \"Revolution\", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In Davis' inauguration speech, he explained the Confederacy was not a French-like revolution, but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress.",
"title": "Government and politics"
},
{
"paragraph_id": 149,
"text": "The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors.",
"title": "Government and politics"
},
{
"paragraph_id": 150,
"text": "The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds votes required in the U.S. Congress. In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, as the Confederacy was defeated before the completion of his term.",
"title": "Government and politics"
},
{
"paragraph_id": 151,
"text": "The only two \"formal, national, functioning, civilian administrative bodies\" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. The Provisional Confederate Congress was a unicameral assembly; each state received one vote.",
"title": "Government and politics"
},
{
"paragraph_id": 152,
"text": "The Permanent Confederate Congress was elected and began its first session February 18, 1862. The Permanent Congress for the Confederacy followed the United States forms with a bicameral legislature. The Senate had two per state, twenty-six Senators. The House numbered 106 representatives apportioned by free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865.",
"title": "Government and politics"
},
{
"paragraph_id": 153,
"text": "The political influences of the civilian, soldier vote and appointed representatives reflected divisions of political geography of a diverse South. These in turn changed over time relative to Union occupation and disruption, the war impact on the local economy, and the course of the war. Without political parties, key candidate identification related to adopting secession before or after Lincoln's call for volunteers to retake Federal property. Previous party affiliation played a part in voter selection, predominantly secessionist Democrat or unionist Whig.",
"title": "Government and politics"
},
{
"paragraph_id": 154,
"text": "The absence of political parties made individual roll call voting all the more important, as the Confederate \"freedom of roll-call voting [was] unprecedented in American legislative history.\" Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy including impressment of slaves, goods and scorched earth, and (4) support of the Jefferson Davis administration in its foreign affairs and negotiating peace.",
"title": "Government and politics"
},
{
"paragraph_id": 155,
"text": "The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the \"Supreme Court of the Confederate States\". Thus, the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government.",
"title": "Government and politics"
},
{
"paragraph_id": 156,
"text": "Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same US Federal District Judges were appointed as Confederate States District Judges. Confederate district courts began reopening in early 1861, handling many of the same type cases as had been done before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate Receivers.",
"title": "Government and politics"
},
{
"paragraph_id": 157,
"text": "When the matter came before the Confederate court, the property owner could not appear because he was unable to travel across the front lines between Union and Confederate forces. Thus, the District Attorney won the case by default, the property was typically sold, and the money used to further the Southern war effort. Eventually, because there was no Confederate Supreme Court, sharp attorneys like South Carolina's Edward McCrady began filing appeals. This prevented their clients' property from being sold until a supreme court could be constituted to hear the appeal, which never occurred. Where Federal troops gained control over parts of the Confederacy and re-established civilian government, US district courts sometimes resumed jurisdiction.",
"title": "Government and politics"
},
{
"paragraph_id": 158,
"text": "Supreme Court – not established.",
"title": "Government and politics"
},
{
"paragraph_id": 159,
"text": "District Courts – judges",
"title": "Government and politics"
},
{
"paragraph_id": 160,
"text": "When the Confederacy was formed and its seceding states broke from the Union, it was at once confronted with the arduous task of providing its citizens with a mail delivery system, and, amid the American Civil War, the newly formed Confederacy created and established the Confederate Post Office. One of the first undertakings in establishing the Post Office was the appointment of John H. Reagan to the position of Postmaster General, by Jefferson Davis in 1861. This made him the first Postmaster General of the Confederate Post Office, and a member of Davis's presidential cabinet. Writing in 1906, historian Walter Flavius McCaleb praised Reagan's \"energy and intelligence... in a degree scarcely matched by any of his associates\".",
"title": "Government and politics"
},
{
"paragraph_id": 161,
"text": "When the war began, the US Post Office briefly delivered mail from the secessionist states. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing US postage was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the Confederacy to the U.S. was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was also inspected before being sent on.",
"title": "Government and politics"
},
{
"paragraph_id": 162,
"text": "With the chaos of the war, a working postal system was more important than ever for the Confederacy. The Civil War had divided family members and friends and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men who were away serving in an army. Mail delivery was also important for the Confederacy for a myriad of business and military reasons. Because of the Union blockade, basic supplies were always in demand and so getting mailed correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the Blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War is Prisoner of War mail and Blockade mail as these items were often involved with a variety of military and other war time activities. The postal history of the Confederacy along with surviving Confederate mail has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded.",
"title": "Government and politics"
},
{
"paragraph_id": 163,
"text": "The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Neely argues:",
"title": "Government and politics"
},
{
"paragraph_id": 164,
"text": "The Confederate citizen was not any freer than the Union citizen – and perhaps no less likely to be arrested by military authorities. In fact, the Confederate citizen may have been in some ways less free than his Northern counterpart. For example, freedom to travel within the Confederate states was severely limited by a domestic passport system.",
"title": "Government and politics"
},
{
"paragraph_id": 165,
"text": "Across the South, widespread rumors alarmed the whites by predicting the slaves were planning some sort of insurrection. Patrols were stepped up. The slaves did become increasingly independent, and resistant to punishment, but historians agree there were no insurrections. In the invaded areas, insubordination was more the norm than was loyalty to the old master; Bell Wiley says, \"It was not disloyalty, but the lure of freedom.\" Many slaves became spies for the North, and large numbers ran away to federal lines.",
"title": "Economy"
},
{
"paragraph_id": 166,
"text": "Lincoln's Emancipation Proclamation, an executive order of the U.S. government on January 1, 1863, changed the legal status of three million slaves in designated areas of the Confederacy from \"slave\" to \"free\". The long-term effect was that the Confederacy could not preserve the institution of slavery and lost the use of the core element of its plantation labor force. Slaves were legally freed by the Proclamation, and became free by escaping to federal lines, or by advances of federal troops. Over 200,000 freed slaves were hired by the federal army as teamsters, cooks, launderers and laborers, and eventually as soldiers. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their slaves as far as possible out of reach of the Union army. Though the concept was promoted within certain circles of the Union hierarchy during and immediately following the war, no program of reparations for freed slaves was ever attempted. Unlike other Western countries, such as Britain and France, the U.S. government never paid compensation to Southern slave owners for their \"lost property\".",
"title": "Economy"
},
{
"paragraph_id": 167,
"text": "Most whites were subsistence farmers who traded their surpluses locally. The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. It supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, as cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by non-southerners.",
"title": "Economy"
},
{
"paragraph_id": 168,
"text": "The plantations that enslaved over three million black people were the principal source of wealth. Most were concentrated in \"black belt\" plantation areas (because few white families in the poor regions owned slaves). For decades, there had been widespread fear of slave revolts. During the war, extra men were assigned to \"home guard\" patrol duty and governors sought to keep militia units at home for protection. Historian William Barney reports, \"no major slave revolts erupted during the Civil War.\" Nevertheless, slaves took the opportunity to enlarge their sphere of independence, and when union forces were nearby, many ran off to join them.",
"title": "Economy"
},
{
"paragraph_id": 169,
"text": "Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was top-heavy income distribution. Mass production requires mass markets, and slaves living in small cabins, using self-made tools and outfitted with one suit of work clothes each year of inferior fabric, did not generate consumer demand to sustain local manufactures of any description in the same way as did a mechanized family farm of free labor in the North. The Southern economy was \"pre-capitalist\" in that slaves were put to work in the largest revenue-producing enterprises, not free labor markets. That labor system as practiced in the American South encompassed paternalism, whether abusive or indulgent, and that meant labor management considerations apart from productivity.",
"title": "Economy"
},
{
"paragraph_id": 170,
"text": "Approximately 85% of both the North and South white populations lived on family farms, both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was pre-capitalist in its overwhelming reliance on the agriculture of cash crops to produce wealth, while the great majority of farmers fed themselves and supplied a small local market. Southern cities and industries grew faster than ever before, but the thrust of the rest of the country's exponential growth elsewhere was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a \"great distance\" as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets.",
"title": "Economy"
},
{
"paragraph_id": 171,
"text": "A third count of the pre-capitalist Southern economy relates to the cultural setting. The South and southerners did not adopt a work ethic, nor the habits of thrift that marked the rest of the country. It had access to the tools of capitalism, but it did not adopt its culture. The Southern Cause as a national economy in the Confederacy was grounded in \"slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations\".",
"title": "Economy"
},
{
"paragraph_id": 172,
"text": "The Confederacy started its existence as an agrarian economy with exports, to a world market, of cotton, and, to a lesser extent, tobacco and sugarcane. Local food production included grains, hogs, cattle, and gardens. The cash came from exports but the Southern people spontaneously stopped exports in early 1861 to hasten the impact of \"King Cotton\", a failed strategy to coerce international support for the Confederacy through its cotton exports. When the blockade was announced, commercial shipping practically ended (the ships could not get insurance), and only a trickle of supplies came via blockade runners. The cutoff of exports was an economic disaster for the South, rendering useless its most valuable properties, its plantations and their enslaved workers. Many planters kept growing cotton, which piled up everywhere, but most turned to food production. All across the region, the lack of repair and maintenance wasted away the physical assets.",
"title": "Economy"
},
{
"paragraph_id": 173,
"text": "The eleven states had produced $155 million (~$4.14 billion in 2022) in manufactured goods in 1860, chiefly from local gristmills, and lumber, processed tobacco, cotton goods and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, that were never under Confederate control. The government did set up munitions factories in the Deep South. Combined with captured munitions and those coming via blockade runners, the armies were kept minimally supplied with weapons. The soldiers suffered from reduced rations, lack of medicines, and the growing shortages of uniforms, shoes and boots. Shortages were much worse for civilians, and the prices of necessities steadily rose.",
"title": "Economy"
},
{
"paragraph_id": 174,
"text": "The Confederacy adopted a tariff or tax on imports of 15%, and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue. The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution by centralization and standardization, but it was too little too late as its economy was systematically strangled by blockade and raids.",
"title": "Economy"
},
{
"paragraph_id": 175,
"text": "In peacetime, the South's extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had developed as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. Railroads tied plantation areas to the nearest river or seaport and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines.",
"title": "Economy"
},
{
"paragraph_id": 176,
"text": "At the onset of the Civil War the South had a rail network disjointed and plagued by changes in track gauge as well as lack of interchange. Locomotives and freight cars had fixed axles and could not use tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons for transport to the connecting railroad station, where it had to await freight cars and a locomotive before proceeding. Centers requiring off-loading included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Because of this design limitation, the relatively primitive railroads of the Confederacy were unable to overcome the Union naval blockade of the South's crucial intra-coastal and river routes.",
"title": "Economy"
},
{
"paragraph_id": 177,
"text": "The Confederacy had no plan to expand, protect or encourage its railroads. Southerners' refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. In the early years of the war the Confederate government had a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the de facto control of the military. In contrast, the U.S. Congress had authorized military administration of Union-controlled railroad and telegraph systems in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Confederate armies successfully reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865.",
"title": "Economy"
},
{
"paragraph_id": 178,
"text": "In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment and raids on both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and rolling stock wore out through heavy use.",
"title": "Economy"
},
{
"paragraph_id": 179,
"text": "The Confederate army experienced a persistent shortage of horses and mules and requisitioned them with dubious promissory notes given to local farmers and breeders. Union forces paid in real money and found ready sellers in the South. Both armies needed horses for cavalry and for artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the invading Union forces had a policy of shooting all the local horses and mules that they did not need, in order to keep them out of Confederate hands. The Confederate armies and farmers experienced a growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months.",
"title": "Economy"
},
{
"paragraph_id": 180,
"text": "Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, with a total face value of $1.5 billion. Much of it was signed by Treasurer Edward C. Elmore. Inflation became rampant as the paper money depreciated and eventually became worthless. The state governments and some localities printed their own paper money, adding to the runaway inflation. Many bills still exist, although in recent years counterfeit copies have proliferated.",
"title": "Economy"
},
{
"paragraph_id": 181,
"text": "The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. After the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. The Confederate States politicians were worried about angering the general population with hard taxes. A tax increase might disillusion many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the southern states throughout the rest of the war. By April 1863, for example, the cost of flour in Richmond had risen to $100 (~$2,377 in 2022) a barrel and housewives were rioting.",
"title": "Economy"
},
{
"paragraph_id": 182,
"text": "The Confederate government took over the three national mints in its territory: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861 all of these facilities produced small amounts of gold coinage, and the latter half dollars as well. Since the mints used the current dies on hand, all appear to be U.S. issues. However, by comparing slight differences in the dies specialists can distinguish 1861-O half dollars that were minted either under the authority of the U.S. government, the State of Louisiana, or finally the Confederate States. Unlike the gold coins, this issue was produced in significant numbers (over 2.5 million) and is inexpensive in lower grades, although fakes have been made for sale to the public. However, before the New Orleans Mint ceased operation in May 1861, the Confederate government used its own reverse design to strike four half dollars. This made one of the great rarities of American numismatics. A lack of silver and gold precluded further coinage. The Confederacy apparently also experimented with issuing one cent coins, although only 12 were produced by a jeweler in Philadelphia, who was afraid to send them to the South. Like the half dollars, copies were later made as souvenirs.",
"title": "Economy"
},
{
"paragraph_id": 183,
"text": "US coinage was hoarded and did not have any general circulation. U.S. coinage was admitted as legal tender up to $10, as were British sovereigns, French Napoleons and Spanish and Mexican doubloons at a fixed rate of exchange. Confederate money was paper and postage stamps.",
"title": "Economy"
},
{
"paragraph_id": 184,
"text": "By mid-1861, the Union naval blockade virtually shut down the export of cotton and the import of manufactured goods. Food that formerly came overland was cut off.",
"title": "Economy"
},
{
"paragraph_id": 185,
"text": "As women were the ones who remained at home, they had to make do with the lack of food and supplies. They cut back on purchases, used old materials, and planted more flax and peas to provide clothing and food. They used ersatz substitutes when possible, but there was no real coffee, only okra and chicory substitutes. The households were severely hurt by inflation in the cost of everyday items like flour, and the shortages of food, fodder for the animals, and medical supplies for the wounded.",
"title": "Economy"
},
{
"paragraph_id": 186,
"text": "State governments requested that planters grow less cotton and more food, but most refused. When cotton prices soared in Europe, expectations were that Europe would soon intervene to break the blockade and make them rich, but Europe remained neutral. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess. But food shortages only worsened, especially in the towns.",
"title": "Economy"
},
{
"paragraph_id": 187,
"text": "The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1863, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food, as they were angry at ineffective state relief efforts, speculators, and merchants. As wives and widows of soldiers, they were hurt by the inadequate welfare system.",
"title": "Economy"
},
{
"paragraph_id": 188,
"text": "By the end of the war deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Every Confederate state was affected, but most of the war was fought in Virginia and Tennessee, while Texas and Florida saw the least military action. Much of the damage was caused by direct military action, but most was caused by lack of repairs and upkeep, and by deliberately using up resources. Historians have recently estimated how much of the devastation was caused by military action. Paul Paskoff calculates that Union military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, undoubtedly some people had fled to safer areas, so the exact population exposed to war is unknown.",
"title": "Economy"
},
{
"paragraph_id": 189,
"text": "The eleven Confederate States in the 1860 United States Census had 297 towns and cities with 835,000 people; of these 162 with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated what their actual population was when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1% of the Confederacy's 1860 population. In addition, 45 court houses were burned (out of 830). The South's agriculture was not highly mechanized. The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, there was 40% less, worth just $48 million. Many old tools had broken through heavy use; new tools were rarely available; even repairs were difficult.",
"title": "Economy"
},
{
"paragraph_id": 190,
"text": "The economic losses affected everyone. Banks and insurance companies were mostly bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. Most debts were also left behind. Most farms were intact, but most had lost their horses, mules and cattle; fences and barns were in disrepair. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby. The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but a depleted stock of material resources that they could manage and operate themselves. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The North, by contrast, absorbed its material losses so effortlessly that it appeared richer at the end of the war than at the beginning.",
"title": "Economy"
},
{
"paragraph_id": 191,
"text": "The rebuilding took years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. One historian has summarized the collapse of the transportation infrastructure needed for economic recovery:",
"title": "Economy"
},
{
"paragraph_id": 192,
"text": "One of the greatest calamities which confronted Southerners was the havoc wrought on the transportation system. Roads were impassable or nonexistent, and bridges were destroyed or washed away. The important river traffic was at a standstill: levees were broken, channels were blocked, the few steamboats which had not been captured or destroyed were in a state of disrepair, wharves had decayed or were missing, and trained personnel were dead or dispersed. Horses, mules, oxen, carriages, wagons, and carts had nearly all fallen prey at one time or another to the contending armies. The railroads were paralyzed, with most of the companies bankrupt. These lines had been the special target of the enemy. On one stretch of 114 miles in Alabama, every bridge and trestle was destroyed, cross-ties rotten, buildings burned, water-tanks gone, ditches filled up, and tracks grown up in weeds and bushes ... Communication centers like Columbia and Atlanta were in ruins; shops and foundries were wrecked or in disrepair. Even those areas bypassed by battle had been pirated for equipment needed on the battlefront, and the wear and tear of wartime usage without adequate repairs or replacements reduced all to a state of disintegration.",
"title": "Economy"
},
{
"paragraph_id": 193,
"text": "More than 250,000 Confederate soldiers died during the war. Some widows abandoned their family farms and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an \"old maid\" was an embarrassment to the woman and her family, but after the war, it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the \"New Woman\" emerged – she was self-sufficient and independent, and stood in sharp contrast to the \"Southern Belle\" of antebellum lore.",
"title": "Economy"
},
{
"paragraph_id": 194,
"text": "The first official flag of the Confederate States of America—called the \"Stars and Bars\"—originally had seven stars, representing the first seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run, (First Manassas) it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate \"Battle Flag\" was designed for use by troops in the field. Also known as the \"Southern Cross\", many variations sprang from the original square configuration.",
"title": "National flags"
},
{
"paragraph_id": 195,
"text": "Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. This new standard—known as the \"Stainless Banner\"—consisted of a lengthened white field area with a Battle Flag canton. This flag too had its problems when used in military operations as, on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end.",
"title": "National flags"
},
{
"paragraph_id": 196,
"text": "Because of its depiction in the 20th-century and popular media, many people consider the rectangular battle flag with the dark blue bars as being synonymous with \"the Confederate Flag\", but this flag was never adopted as a Confederate national flag.",
"title": "National flags"
},
{
"paragraph_id": 197,
"text": "The \"Confederate Flag\" has a color scheme similar to that of the most common Battle Flag design, but is rectangular, not square. The \"Confederate Flag\" is a highly recognizable symbol of the South in the United States today and continues to be a controversial icon.",
"title": "National flags"
},
{
"paragraph_id": 198,
"text": "Unionism—opposition to the Confederacy—was strong in certain areas within the Confederate States. Southern Unionists (white Southerners who were opposed to the Confederacy) were widespread in the mountain regions of Appalachia and the Ozarks. Unionists, led by Parson Brownlow and Senator Andrew Johnson, took control of East Tennessee in 1863. Unionists also attempted control over western Virginia, but never effectively held more than half of the counties that formed the new state of West Virginia. Union forces captured parts of coastal North Carolina, and at first were largely welcomed by local unionists. That view would change for some, as the occupiers became perceived as oppressive, callous, radical and favorable to Freedmen. Occupiers pillaged, freed slaves, and evicted those who refused to swear loyalty oaths to the Union.",
"title": "Southern Unionism"
},
{
"paragraph_id": 199,
"text": "Support for the Confederacy was also low in parts of Texas, where Unionism persisted in certain areas. Claude Elliott estimates that only a third of the population actively supported the Confederacy. Many Unionists supported the Confederacy after the war began, but many others clung to their Unionism throughout the war, especially in the northern counties, German districts in the Texas Hill Country, and majority Mexican areas. According to Ernest Wallace: \"This account of a dissatisfied Unionist minority, although historically essential, must be kept in its proper perspective, for throughout the war the overwhelming majority of the people zealously supported the Confederacy ...\" Randolph B. Campbell states, \"In spite of terrible losses and hardships, most Texans continued throughout the war to support the Confederacy as they had supported secession\". Dale Baum in his analysis of Texas politics in the era counters: \"This idea of a Confederate Texas united politically against northern adversaries was shaped more by nostalgic fantasies than by wartime realities.\" He characterizes Texas Civil War history as \"a morose story of intragovernmental rivalries coupled with wide-ranging disaffection that prevented effective implementation of state wartime policies\".",
"title": "Southern Unionism"
},
{
"paragraph_id": 200,
"text": "In Texas, local officials harassed and murdered Unionists and Germans during the Civil War. In Cooke County, Texas, 150 suspected Unionists were arrested; 25 were lynched without trial and 40 more were hanged after a summary trial. Draft resistance was widespread especially among Texans of German or Mexican descent; many of the latter leaving to Mexico. Confederate officials would attempt to hunt down and kill potential draftees who had gone into hiding.",
"title": "Southern Unionism"
},
{
"paragraph_id": 201,
"text": "Civil liberties were of small concern in both the North and South. Lincoln and Davis both took a hard line against dissent. Neely explores how the Confederacy became a virtual police state with guards and patrols all about, and a domestic passport system whereby everyone needed official permission each time they wanted to travel. Over 4,000 suspected Unionists were imprisoned in the Confederate States without trial.",
"title": "Southern Unionism"
},
{
"paragraph_id": 202,
"text": "Southerner Unionists were also known as Union Loyalists or Lincoln's Loyalists. Within the eleven Confederate states, states such as Tennessee (especially East Tennessee), Virginia (which included West Virginia at the time), and North Carolina had the largest populations of Unionists. Many areas of Southern Appalachia harbored pro-Union sentiment. Up to 100,000 men living in states under Confederate control served in the Union Army or pro-Union guerilla groups. Although Southern Unionists came from all classes, most differed socially, culturally, and economically from the regions dominant pre-war planter class.",
"title": "Southern Unionism"
},
{
"paragraph_id": 203,
"text": "The Confederate States of America claimed a total of 2,919 miles (4,698 km) of coastline, thus a large part of its territory lay on the seacoast with level and often sandy or marshy ground. Most of the interior portion consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The southern reaches of the Mississippi River bisected the country, and the western half was often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas at 8,750 feet (2,670 m).",
"title": "Geography"
},
{
"paragraph_id": 204,
"text": "Much of the area claimed by the Confederate States of America had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps (such as those in Florida and Louisiana) to semi-arid steppes and arid deserts west of longitude 100 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, on both sides more soldiers died from disease than were killed in combat, a fact hardly atypical of pre-World War I conflicts.",
"title": "Geography"
},
{
"paragraph_id": 205,
"text": "The United States Census of 1860 gives a picture of the overall 1860 population for the areas that had joined the Confederacy. The population numbers exclude non-assimilated Indian tribes.",
"title": "Demographics"
},
{
"paragraph_id": 206,
"text": "In 1860, the areas that later formed the eleven Confederate states (and including the future West Virginia) had 132,760 (2%) free blacks. Males made up 49% of the total population and females 51% (whites: 49% male, 51% female; slaves: 50% male, 50% female; free blacks: 47% male, 53% female).",
"title": "Demographics"
},
{
"paragraph_id": 207,
"text": "The CSA was overwhelmingly rural. Few towns had populations of more than 1,000—the typical county seat had a population of fewer than 500. Cities were rare; of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory—and the Union captured New Orleans in 1862. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Other Southern cities in the border slave-holding states such as Baltimore, Washington, D.C., Wheeling, Alexandria, Louisville, and St. Louis never came under the control of the Confederate government.",
"title": "Demographics"
},
{
"paragraph_id": 208,
"text": "The cities of the Confederacy included most prominently in order of size of population:",
"title": "Demographics"
},
{
"paragraph_id": 209,
"text": "See also Atlanta in the Civil War, Charleston, South Carolina, in the Civil War, Nashville in the Civil War, New Orleans in the Civil War, Wilmington, North Carolina, in the American Civil War, and Richmond in the Civil War).",
"title": "Demographics"
},
{
"paragraph_id": 210,
"text": "The CSA was overwhelmingly Protestant. Both free and enslaved populations identified with evangelical Protestantism. Baptists and Methodists together formed majorities of both the white and the slave population, becoming the Black church. Freedom of religion and separation of church and state were fully ensured by Confederate laws. Church attendance was very high and chaplains played a major role in the Army.",
"title": "Demographics"
},
{
"paragraph_id": 211,
"text": "Most large denominations experienced a North–South split in the prewar era on the issue of slavery. The creation of a new country necessitated independent structures. For example, the Presbyterian Church in the United States split, with much of the new leadership provided by Joseph Ruggles Wilson (father of President Woodrow Wilson). In 1861, he organized the meeting that formed the General Assembly of the Southern Presbyterian Church and served as its chief executive for 37 years. Baptists and Methodists both broke off from their Northern coreligionists over the slavery issue, forming the Southern Baptist Convention and the Methodist Episcopal Church, South, respectively. Elites in the southeast favored the Protestant Episcopal Church in the Confederate States of America, which had reluctantly split from the Episcopal Church in 1861. Other elites were Presbyterians belonging to the 1861-founded Presbyterian Church in the United States. Catholics included an Irish working-class element in coastal cities and an old French element in southern Louisiana. Other insignificant and scattered religious populations included Lutherans, the Holiness movement, other Reformed, other Christian fundamentalists, the Stone-Campbell Restoration Movement, the Churches of Christ, the Latter Day Saint movement, Adventists, Muslims, Jews, Native American animists, deists and irreligious people.",
"title": "Demographics"
},
{
"paragraph_id": 212,
"text": "The southern churches met the shortage of Army chaplains by sending missionaries. The Southern Baptists started in 1862 and had a total of 78 missionaries. Presbyterians were even more active with 112 missionaries in January 1865. Other missionaries were funded and supported by the Episcopalians, Methodists, and Lutherans. One result was wave after wave of revivals in the Army.",
"title": "Demographics"
},
{
"paragraph_id": 213,
"text": "Military leaders of the Confederacy (with their state or country of birth and highest rank) included:",
"title": "Military leaders"
}
]
After the war, during the Reconstruction era, the Confederate states were readmitted to the Congress after each ratified the 13th Amendment to the U.S. Constitution outlawing slavery. Lost Cause mythology, an idealized view of the Confederacy valiantly fighting for a just cause, emerged in the decades after the war among former Confederate generals and politicians, and in organizations such as the United Daughters of the Confederacy and the Sons of Confederate Veterans. Intense periods of Lost Cause activity developed around the turn of the 20th century and during the civil rights movement of the 1950s and 1960s in reaction to growing support for racial equality. Advocates sought to ensure future generations of Southern whites would continue to support white supremacist policies such as the Jim Crow laws through activities such as building Confederate monuments and influencing the authors of textbooks to write on Lost Cause ideology. The modern display of Confederate flags primarily started during the 1948 presidential election, when the battle flag was used by the Dixiecrats. During the Civil Rights Movement, segregationists used it for demonstrations.
"Template:Reflist",
"Template:Cbignore",
"Template:Sister project links",
"Template:Use American English",
"Template:Use mdy dates",
"Template:See also",
"Template:Convert",
"Template:Cite news",
"Template:JSTOR",
"Template:OCLC",
"Template:Short description",
"Template:TOC limit",
"Template:Quote box",
"Template:Cite encyclopedia",
"Template:DANFS",
"Template:Refbegin",
"Template:Portal bar",
"Template:Redirect",
"Template:Further",
"Template:Col-break",
"Template:Div col",
"Template:Navboxes",
"Template:Small",
"Template:Cite web",
"Template:Dead link",
"Template:USS",
"Template:Internet Archive author",
"Template:Long",
"Template:Blockquote",
"Template:Col-begin",
"Template:Pp-protect",
"Template:Infobox country",
"Template:Wikisource",
"Template:Librivox author",
"Template:CS statehood and territory dates",
"Template:Cite journal",
"Template:Inflation/year",
"Template:Spaced ndash",
"Template:Cite book",
"Template:Refend",
"Template:Authority control",
"Template:Events leading to American Civil War",
"Template:Multiple image",
"Template:Page number needed",
"Template:Smallcaps all",
"Template:Col-end",
"Template:\"'",
"Template:Col-2",
"Template:Rp",
"Template:Hatnote",
"Template:ISBN",
"Template:Sic",
"Template:Main",
"Template:Clear",
"Template:Webarchive",
"Template:Ussc",
"Template:Format price",
"Template:Citation needed",
"Template:Div col end"
] | https://en.wikipedia.org/wiki/Confederate_States_of_America |
7,025 | Cranberry | Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus Vaccinium. In Britain, cranberry may refer to the native species Vaccinium oxycoccos, while in North America, cranberry may refer to Vaccinium macrocarpon. Vaccinium oxycoccos is cultivated in central and northern Europe, while Vaccinium macrocarpon is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, Oxycoccus is regarded as a genus in its own right. Cranberries can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere.
Cranberries are low, creeping shrubs or vines up to 2 meters (7 ft) long and 5 to 20 centimeters (2 to 8 in) in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but with an acidic taste that usually overwhelms its sweetness.
In 2020, the United States, Canada, and Chile accounted for 97% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the United States and Canada, and at Christmas dinner in the United Kingdom.
Cranberries are related to bilberries, blueberries, and huckleberries, all in Vaccinium subgenus Vaccinium. These differ in having bell-shaped flowers, petals that are not reflexed, and woodier stems, forming taller shrubs. There are 4–5 species of cranberry, classified by subgenus:
The name cranberry derives from the Middle Low German kraanbere (English translation, craneberry), first recorded as cranberry in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word cranberry because the expanding flower, stem, calyx, and petals resemble the neck, head, and bill of a crane. The traditional English name for the plant more common in Europe, Vaccinium oxycoccos, is fenberry, which originated from plants with small red berries found growing in fen (marsh) lands of England.
In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. Calling the red berries sasemineash, the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made reference to Native Americans using cranberries, the earliest known reference to American cranberries. In James Rosier's book The Land of Virginia there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book A Key into the Language of America described cranberries, referring to them as "bearberries" because bears ate them. In 1648, preacher John Eliot was quoted in Thomas Shepard's book Clear Sunshine of the Gospel with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries, as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appeared with a recipe for cranberry sauce. In 1667, New Englanders sent King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the pine tree shilling minted by John Hull. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book New England Rarities Discovered author John Josselyn described cranberries, writing:
Sauce for the Pilgrims, cranberry or bearberry, is a small trayling [sic] plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower [sic] astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of hoof diseases. The Indians and English use them much, boyling [sic] them with sugar for sauce to eat with their meat; and it is a delicate sauce, especially with roasted mutton. Some make tarts with them as with gooseberries.
The Compleat Cook's Guide, published in 1683, made reference to cranberry juice. In 1703, cranberries were served at the Harvard University commencement dinner. In 1787, James Madison wrote Thomas Jefferson in France for background information on constitutional government to use at the Constitutional Convention. Jefferson sent back a number of books on the subject and in return asked for a gift of apples, pecans and cranberries. William Aiton, a Scottish botanist, included an entry for the cranberry in volume II of his 1789 work Hortus Kewensis. He notes that Vaccinium macrocarpon (American cranberry) was cultivated by James Gordon in 1760. In 1796, cranberries were served at the first celebration of the landing of the Pilgrims, and Amelia Simmons (an American orphan) wrote a book entitled American Cookery which contained a recipe for cranberry tarts.
American Revolutionary War veteran Henry Hall first cultivated cranberries in the Cape Cod town of Dennis around 1816. In the 1820s, Hall was shipping cranberries to New York City and Boston from which shipments were also sent to Europe. In 1843, Eli Howes planted his own crop of cranberries on Cape Cod, using the "Howes" variety. In 1847, Cyrus Cahoon planted a crop of "Early Black" variety near Pleasant Lake, Harwich, Massachusetts.
By 1900, 8,700 hectares (21,500 acres) were under cultivation in the New England region. In 2021, the total output of cranberries harvested in the United States was 360,000 metric tons (790 million pounds), with Wisconsin as the largest state producer (59% of total), followed by Massachusetts and Oregon.
Historically, cranberry beds were constructed in wetlands. Today's cranberry beds are constructed in upland areas with a shallow water table. The topsoil is scraped off to form dykes around the bed perimeter. Clean sand is hauled in and spread to a depth of 10 to 20 centimeters (4 to 8 in). The surface is laser leveled flat to provide even drainage. Beds are frequently drained with socked tile in addition to the perimeter ditch. In addition to making it possible to hold water, the dykes allow equipment to service the beds without driving on the vines. Irrigation equipment is installed in the bed to provide irrigation for vine growth and for spring and autumn frost protection.
A common misconception about cranberry production is that the beds remain flooded throughout the year. During the growing season cranberry beds are not flooded, but are irrigated regularly to maintain soil moisture. Beds are flooded in the autumn to facilitate harvest and again during the winter to protect against low temperatures. In cold climates like Wisconsin, New England, and eastern Canada, the winter flood typically freezes into ice, while in warmer climates the water remains liquid. When ice forms on the beds, trucks can be driven onto the ice to spread a thin layer of sand to control pests and rejuvenate the vines. Sanding is done every three to five years.
Cranberry vines are propagated by moving vines from an established bed. The vines are spread on the surface of the sand of the new bed and pushed into the sand with a blunt disk. The vines are watered frequently during the first few weeks until roots form and new shoots grow. Beds are given frequent, light application of nitrogen fertilizer during the first year. The cost of renovating cranberry beds is estimated to be between $74,000 and $124,000 per hectare ($30,000 and $50,000 per acre).
Cranberries are harvested in the fall when the fruit takes on its distinctive deep red color, and most ideally after the first frost. Berries that receive sun turn a deep red when fully ripe, while those that do not fully mature are a pale pink or white color. This is usually in September through the first part of November. To harvest cranberries, the beds are flooded with 15 to 20 centimeters (6 to 8 in) of water above the vines. A harvester is driven through the beds to remove the fruit from the vines. For the past 50 years, water reel type harvesters have been used. Harvested cranberries float in the water and can be corralled into a corner of the bed and conveyed or pumped from the bed. From the farm, cranberries are taken to receiving stations where they are cleaned, sorted, and stored prior to packaging or processing. While cranberries are harvested when they take on their deep red color, they can also be harvested beforehand when they are still white, which is how white cranberry juice is made. Yields are lower on beds harvested early and the early flooding tends to damage vines, but not severely. Vines can also be trained through dry picking to help avoid damage in subsequent harvests.
Although most cranberries are wet-picked as described above, 5–10% of the US crop is still dry-picked. This entails higher labor costs and lower yield, but dry-picked berries are less bruised and can be sold as fresh fruit instead of having to be immediately frozen or processed. Originally performed with two-handed comb scoops, dry picking is today accomplished by motorized, walk-behind harvesters which must be small enough to traverse beds without damaging the vines.
Cranberries for fresh market are stored in shallow bins or boxes with perforated or slatted bottoms, which deter decay by allowing air to circulate. Because harvest occurs in late autumn, cranberries for fresh market are frequently stored in thick walled barns without mechanical refrigeration. Temperatures are regulated by opening and closing vents in the barn as needed. Cranberries destined for processing are usually frozen in bulk containers shortly after arriving at a receiving station.
Diseases of cranberry include:
In 2020, world production of cranberry was 663,345 tonnes, mainly by the United States, Canada, and Chile, which collectively accounted for 97% of the global total (table). Wisconsin (59% of US production) and Quebec (60% of Canadian production) were the two largest regional producers of cranberries in North America. Cranberries are also a major commercial crop in Massachusetts, New Jersey, Oregon, and Washington, as well as in the Canadian province of British Columbia (33% of Canadian production).
As fresh cranberries are hard, sour, and bitter, about 95% of cranberries are processed and used to make cranberry juice and sauce. They are also sold dried and sweetened. Cranberry juice is usually sweetened or blended with other fruit juices to reduce its natural tartness. At four teaspoons of sugar per 100 grams (one teaspoon per ounce), cranberry juice cocktail is more highly sweetened than even soda drinks that have been linked to obesity.
Usually cranberries as fruit are cooked into a compote or jelly, known as cranberry sauce. Such preparations are traditionally served with roast turkey, as a staple of Thanksgiving (both in Canada and in the United States) as well as English dinners. The berry is also used in baking (muffins, scones, cakes and breads). In baking it is often combined with orange or orange zest. Less commonly, cranberries are used to add tartness to savory dishes such as soups and stews.
Fresh cranberries can be frozen at home, and will keep up to nine months; they can be used directly in recipes without thawing.
There are several alcoholic cocktails, including the Cosmopolitan, that include cranberry juice.
Raw cranberries are 87% water, 12% carbohydrates, and contain negligible protein and fat (table). In a 100 gram reference amount, raw cranberries supply 46 calories and moderate levels of vitamin C, dietary fiber, and the essential dietary mineral manganese, each with more than 10% of its Daily Value. Other micronutrients have low content (table).
Dried cranberries are commonly processed with up to 10 times their natural sugar content. The drying process also eliminates vitamin C content.
Reviews reaching differing conclusions have been reported on whether consumption of cranberry products is effective for treating urinary tract infections (UTIs) particularly in women, but also in other subject groups. The effectiveness of cranberry juice to treat UTIs has not been well studied and few or no strong randomized controlled trials have been conducted evaluating the effectiveness. Cranberry juice has not been compared to a placebo juice, and there is no evidence to support a specific dose (amount of juice that may give a clinically helpful effect) or how long the juice should be taken (duration of treatment). When the quality of meta-analyses on the efficacy of consuming cranberry products for preventing or treating UTIs is examined with the weaker evidence that is available, large variation and uncertainty of effects are seen, resulting from inconsistencies of clinical research design and inadequate numbers of subjects. In 2014, the European Food Safety Authority reviewed the evidence for one brand of cranberry extract and concluded that a cause and effect relationship had not been established between cranberry consumption and reduced risk of UTIs.
One 2017 systematic review showed that consuming cranberry products reduced the incidence of UTIs in women with recurrent infections, while another review indicated that consuming cranberry products could reduce the risk of UTIs by 26% in otherwise healthy women, although the authors indicated that larger studies were needed to confirm such an effect. However, a 2021 review found that there was insufficient evidence for or against using cranberry products to treat acute UTIs. A 2023 review of 50 studies concluded there is evidence that consuming cranberry products is effective for reducing the risk of UTIs in women with recurrent UTIs, in children, and in people susceptible to UTIs following clinical interventions; in this same review, there was little evidence of effect in elderly people, those with urination disorders, or pregnant women.
Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols – including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. Potential is limited by poor absorption and rapid excretion.
Cranberry juice contains a high molecular weight non-dialyzable material that is under research for its potential to affect formation of plaque by Streptococcus mutans pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation.
Problems may arise from the lack of validated methods for quantifying A-type proanthocyanidins (PAC) extracted from cranberries. For instance, PAC extract quality and content can be assessed using different methods, including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as by regional origin, ripeness at time of harvest and post-harvest processing. Assessments show that quality varies greatly from one commercial PAC extract product to another.
The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include potential for nausea, and increasing stomach inflammation, sugar intake or kidney stone formation.
Cranberry sales in the United States have traditionally been associated with holidays of Thanksgiving and Christmas.
Large-scale cranberry cultivation has been developed in the U.S., in contrast to other countries. American cranberry growers have a long history of cooperative marketing. As early as 1904, John Gaynor, a Wisconsin grower, and A.U. Chaney, a fruit broker from Des Moines, Iowa, organized Wisconsin growers into a cooperative called the Wisconsin Cranberry Sales Company to receive a uniform price from buyers. Growers in New Jersey and Massachusetts were also organized into cooperatives, creating the National Fruit Exchange that marketed fruit under the Eatmor brand. The success of cooperative marketing almost led to its failure: with consistent and high prices, area and production doubled between 1903 and 1917 and prices fell.
With surplus cranberries and changing American households some enterprising growers began canning cranberries that were below-grade for fresh market. Competition between canners was fierce because profits were thin. The Ocean Spray cooperative was established in 1930 through a merger of three primary processing companies: Ocean Spray Preserving company, Makepeace Preserving Co, and Cranberry Products Co. The new company was called Cranberry Canners, Inc. and used the Ocean Spray label on their products. Since the new company represented over 90% of the market, it would have been illegal under American antitrust laws had attorney John Quarles not found an exemption for agricultural cooperatives. As of 2006, about 65% of the North American industry belongs to the Ocean Spray cooperative.
In 1958, Morris April Brothers—who produced Eatmor brand cranberry sauce in Tuckahoe, New Jersey—brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, just in time for the Great Cranberry Scare: on 9 November 1959, Secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 cranberry crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that they could not be completely dependent on the holiday market for their products; they had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry-apple juice blends were introduced, followed by other juice blends.
Prices and production increased steadily during the 1980s and 1990s. Prices peaked at about $65.00 per barrel ($0.65 per pound or $1.43 per kilogram)—a cranberry barrel equals 100 pounds or 45.4 kilograms—in 1996 then fell to $18.00 per barrel ($0.18 per pound or $0.40 per kilogram) in 2001. The cause for the precipitous drop was classic oversupply. Production had outpaced consumption leading to substantial inventory in freezers or as concentrate.
Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products as well as a number of small handlers and processors.
The Cranberry Marketing Committee is an organization that was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good quality product. The order has been renewed and modified slightly over the years. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production.
The Cranberry Marketing Committee, based in Wareham, Massachusetts, represents more than 1,100 cranberry growers and 60 cranberry handlers across Massachusetts, Rhode Island, Connecticut, New Jersey, Wisconsin, Michigan, Minnesota, Oregon, Washington and New York (Long Island). The authority for the actions taken by the Cranberry Marketing Committee is provided in Chapter IX, Title 7, Code of Federal Regulations which is called the Federal Cranberry Marketing Order. The Order is part of the Agricultural Marketing Agreement Act of 1937, identifying cranberries as a commodity good that can be regulated by Congress. The Federal Cranberry Marketing Order has been altered over the years to expand the Cranberry Marketing Committee's ability to develop projects in the United States and around the world. The Cranberry Marketing Committee currently runs promotional programs in the United States, China, India, Mexico, Pan-Europe, and South Korea.
As of 2016, the European Union was the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, U.S. cranberry exports to China grew exponentially, making China the second largest country importer, reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers.
Notes
Further reading
Media related to Cranberries at Wikimedia Commons | [
{
"paragraph_id": 0,
"text": "Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus Vaccinium. In Britain, cranberry may refer to the native species Vaccinium oxycoccos, while in North America, cranberry may refer to Vaccinium macrocarpon. Vaccinium oxycoccos is cultivated in central and northern Europe, while Vaccinium macrocarpon is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, Oxycoccus is regarded as a genus in its own right. Cranberries can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cranberries are low, creeping shrubs or vines up to 2 meters (7 ft) long and 5 to 20 centimeters (2 to 8 in) in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but with an acidic taste that usually overwhelms its sweetness.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 2020, the United States, Canada, and Chile accounted for 97% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the United States and Canada, and at Christmas dinner in the United Kingdom.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cranberries are related to bilberries, blueberries, and huckleberries, all in Vaccinium subgenus Vaccinium. These differ in having bell-shaped flowers, petals that are not reflexed, and woodier stems, forming taller shrubs. There are 4–5 species of cranberry, classified by subgenus:",
"title": "Species and description"
},
{
"paragraph_id": 4,
"text": "The name cranberry derives from the Middle Low German kraanbere (English translation, craneberry), first named as cranberry in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word, cranberry, to represent the expanding flower, stem, calyx, and petals resembling the neck, head, and bill of a crane. The traditional English name for the plant more common in Europe, Vaccinium oxycoccos, fenberry, originated from plants with small red berries found growing in fen (marsh) lands of England.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. Calling the red berries, sasemineash, the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made reference to Native Americans using cranberries, and it was the first reference to American cranberries up until this point. In James Rosier's book The Land of Virginia there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book A Key into the Language of America described cranberries, referring to them as \"bearberries\" because bears ate them. In 1648, preacher John Elliott was quoted in Thomas Shepard's book Clear Sunshine of the Gospel with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appears with a recipe for cranberry sauce. In 1667, New Englanders sent to King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the pine tree shilling minted by John Hull. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book New England Rarities Discovered author John Josselyn described cranberries, writing:",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Sauce for the Pilgrims, cranberry or bearberry, is a small trayling [[sic] plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower [sic] astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of hoof diseases. The Indians and English use them mush, boyling [sic] them with sugar for sauce to eat with their meat; and it is a delicate sauce, especially with roasted mutton. Some make tarts with them as with gooseberries.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The Compleat Cook's Guide, published in 1683, made reference to cranberry juice. In 1703, cranberries were served at the Harvard University commencement dinner. In 1787, James Madison wrote Thomas Jefferson in France for background information on constitutional government to use at the Constitutional Convention. Jefferson sent back a number of books on the subject and in return asked for a gift of apples, pecans and cranberries. William Aiton, a Scottish botanist, included an entry for the cranberry in volume II of his 1789 work Hortus Kewensis. He notes that Vaccinium macrocarpon (American cranberry) was cultivated by James Gordon in 1760. In 1796, cranberries were served at the first celebration of the landing of the Pilgrims, and Amelia Simmons (an American orphan) wrote a book entitled American Cookery which contained a recipe for cranberry tarts.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "American Revolutionary War veteran Henry Hall first cultivated cranberries in the Cape Cod town of Dennis around 1816. In the 1820s, Hall was shipping cranberries to New York City and Boston from which shipments were also sent to Europe. In 1843, Eli Howes planted his own crop of cranberries on Cape Cod, using the \"Howes\" variety. In 1847, Cyrus Cahoon planted a crop of \"Early Black\" variety near Pleasant Lake, Harwich, Massachusetts.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "By 1900, 8,700 hectares (21,500 acres) were under cultivation in the New England region. In 2021, the total output of cranberries harvested in the United States was 360,000 metric tons (790 million pounds), with Wisconsin as the largest state producer (59% of total), followed by Massachusetts and Oregon.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Historically, cranberry beds were constructed in wetlands. Today's cranberry beds are constructed in upland areas with a shallow water table. The topsoil is scraped off to form dykes around the bed perimeter. Clean sand is hauled in and spread to a depth of 10 to 20 centimeters (4 to 8 in). The surface is laser leveled flat to provide even drainage. Beds are frequently drained with socked tile in addition to the perimeter ditch. In addition to making it possible to hold water, the dykes allow equipment to service the beds without driving on the vines. Irrigation equipment is installed in the bed to provide irrigation for vine growth and for spring and autumn frost protection.",
"title": "Cultivation"
},
{
"paragraph_id": 11,
"text": "A common misconception about cranberry production is that the beds remain flooded throughout the year. During the growing season cranberry beds are not flooded, but are irrigated regularly to maintain soil moisture. Beds are flooded in the autumn to facilitate harvest and again during the winter to protect against low temperatures. In cold climates like Wisconsin, New England, and eastern Canada, the winter flood typically freezes into ice, while in warmer climates the water remains liquid. When ice forms on the beds, trucks can be driven onto the ice to spread a thin layer of sand to control pests and rejuvenate the vines. Sanding is done every three to five years.",
"title": "Cultivation"
},
{
"paragraph_id": 12,
"text": "Cranberry vines are propagated by moving vines from an established bed. The vines are spread on the surface of the sand of the new bed and pushed into the sand with a blunt disk. The vines are watered frequently during the first few weeks until roots form and new shoots grow. Beds are given frequent, light application of nitrogen fertilizer during the first year. The cost of renovating cranberry beds is estimated to be between $74,000 and $124,000 per hectare ($30,000 and $50,000 per acre).",
"title": "Cultivation"
},
{
"paragraph_id": 13,
"text": "Cranberries are harvested in the fall when the fruit takes on its distinctive deep red color, and most ideally after the first frost. Berries that receive sun turn a deep red when fully ripe, while those that do not fully mature are a pale pink or white color. This is usually in September through the first part of November. To harvest cranberries, the beds are flooded with 15 to 20 centimeters (6 to 8 in) of water above the vines. A harvester is driven through the beds to remove the fruit from the vines. For the past 50 years, water reel type harvesters have been used. Harvested cranberries float in the water and can be corralled into a corner of the bed and conveyed or pumped from the bed. From the farm, cranberries are taken to receiving stations where they are cleaned, sorted, and stored prior to packaging or processing. While cranberries are harvested when they take on their deep red color, they can also be harvested beforehand when they are still white, which is how white cranberry juice is made. Yields are lower on beds harvested early and the early flooding tends to damage vines, but not severely. Vines can also be trained through dry picking to help avoid damage in subsequent harvests.",
"title": "Cultivation"
},
{
"paragraph_id": 14,
"text": "Although most cranberries are wet-picked as described above, 5–10% of the US crop is still dry-picked. This entails higher labor costs and lower yield, but dry-picked berries are less bruised and can be sold as fresh fruit instead of having to be immediately frozen or processed. Originally performed with two-handed comb scoops, dry picking is today accomplished by motorized, walk-behind harvesters which must be small enough to traverse beds without damaging the vines.",
"title": "Cultivation"
},
{
"paragraph_id": 15,
"text": "Cranberries for fresh market are stored in shallow bins or boxes with perforated or slatted bottoms, which deter decay by allowing air to circulate. Because harvest occurs in late autumn, cranberries for fresh market are frequently stored in thick walled barns without mechanical refrigeration. Temperatures are regulated by opening and closing vents in the barn as needed. Cranberries destined for processing are usually frozen in bulk containers shortly after arriving at a receiving station.",
"title": "Cultivation"
},
{
"paragraph_id": 16,
"text": "Diseases of cranberry include:",
"title": "Cultivation"
},
{
"paragraph_id": 17,
"text": "In 2020, world production of cranberry was 663,345 tonnes, mainly by the United States, Canada, and Chile, which collectively accounted for 97% of the global total (table). Wisconsin (59% of US production) and Quebec (60% of Canadian production) were the two largest regional producers of cranberries in North America. Cranberries are also a major commercial crop in Massachusetts, New Jersey, Oregon, and Washington, as well as in the Canadian province of British Columbia (33% of Canadian production).",
"title": "Production"
},
{
"paragraph_id": 18,
"text": "As fresh cranberries are hard, sour, and bitter, about 95% of cranberries are processed and used to make cranberry juice and sauce. They are also sold dried and sweetened. Cranberry juice is usually sweetened or blended with other fruit juices to reduce its natural tartness. At four teaspoons of sugar per 100 grams (one teaspoon per ounce), cranberry juice cocktail is more highly sweetened than even soda drinks that have been linked to obesity.",
"title": "Food uses"
},
{
"paragraph_id": 19,
"text": "Usually cranberries as fruit are cooked into a compote or jelly, known as cranberry sauce. Such preparations are traditionally served with roast turkey, as a staple of Thanksgiving (both in Canada and in the United States) as well as English dinners. The berry is also used in baking (muffins, scones, cakes and breads). In baking it is often combined with orange or orange zest. Less commonly, cranberries are used to add tartness to savory dishes such as soups and stews.",
"title": "Food uses"
},
{
"paragraph_id": 20,
"text": "Fresh cranberries can be frozen at home, and will keep up to nine months; they can be used directly in recipes without thawing.",
"title": "Food uses"
},
{
"paragraph_id": 21,
"text": "There are several alcoholic cocktails, including the Cosmopolitan, that include cranberry juice.",
"title": "Food uses"
},
{
"paragraph_id": 22,
"text": "Raw cranberries are 87% water, 12% carbohydrates, and contain negligible protein and fat (table). In a 100 gram reference amount, raw cranberries supply 46 calories and moderate levels of vitamin C, dietary fiber, and the essential dietary mineral manganese, each with more than 10% of its Daily Value. Other micronutrients have low content (table).",
"title": "Food uses"
},
{
"paragraph_id": 23,
"text": "Dried cranberries are commonly processed with up to 10 times their natural sugar content. The drying process also eliminates vitamin C content.",
"title": "Food uses"
},
{
"paragraph_id": 24,
"text": "Reviews reaching differing conclusions have been reported on whether consumption of cranberry products is effective for treating urinary tract infections (UTIs) particularly in women, but also in other subject groups. The effectiveness of cranberry juice to treat UTIs has not been well studied and few or no strong randomized controlled trials have been conducted evaluating the effectiveness. Cranberry juice has not been compared to a placebo juice, and there is no evidence to support a specific dose (amount of juice that may give a clinically helpful effect) or how long the juice should be taken (duration of treatment). When the quality of meta-analyses on the efficacy of consuming cranberry products for preventing or treating UTIs is examined with the weaker evidence that is available, large variation and uncertainty of effects are seen, resulting from inconsistencies of clinical research design and inadequate numbers of subjects. In 2014, the European Food Safety Authority reviewed the evidence for one brand of cranberry extract and concluded that a cause and effect relationship had not been established between cranberry consumption and reduced risk of UTIs.",
"title": "Research"
},
{
"paragraph_id": 25,
"text": "One 2017 systematic review showed that consuming cranberry products reduced the incidence of UTIs in women with recurrent infections, while another review indicated that consuming cranberry products could reduce the risk of UTIs by 26% in otherwise healthy women, although the authors indicated that larger studies were needed to confirm such an effect. However, a 2021 review found that there was insufficient evidence for or against using cranberry products to treat acute UTIs. A 2023 review of 50 studies concluded there is evidence that consuming cranberry products is effective for reducing the risk of UTIs in women with recurrent UTIs, in children, and in people susceptible to UTIs following clinical interventions; in this same review, there was little evidence of effect in elderly people, those with urination disorders, or pregnant women.",
"title": "Research"
},
{
"paragraph_id": 26,
"text": "Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols – including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. Potential is limited by poor absorption and rapid excretion.",
"title": "Research"
},
{
"paragraph_id": 27,
"text": "Cranberry juice contains a high molecular weight non-dializable material that is under research for its potential to affect formation of plaque by Streptococcus mutans pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation.",
"title": "Research"
},
{
"paragraph_id": 28,
"text": "Problems may arise with the lack of validation for quantifying of A-type proanthocyanidins (PAC) extracted from cranberries. For instance, PAC extract quality and content can be performed using different methods including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as by regional origin, ripeness at time of harvest and post-harvest processing. Assessments show that quality varies greatly from one commercial PAC extract product to another.",
"title": "Research"
},
{
"paragraph_id": 29,
"text": "The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include potential for nausea, and increasing stomach inflammation, sugar intake or kidney stone formation.",
"title": "Research"
},
{
"paragraph_id": 30,
"text": "Cranberry sales in the United States have traditionally been associated with holidays of Thanksgiving and Christmas.",
"title": "Marketing and economics"
},
{
"paragraph_id": 31,
"text": "In the U.S., large-scale cranberry cultivation has been developed as opposed to other countries. American cranberry growers have a long history of cooperative marketing. As early as 1904, John Gaynor, a Wisconsin grower, and A.U. Chaney, a fruit broker from Des Moines, Iowa, organized Wisconsin growers into a cooperative called the Wisconsin Cranberry Sales Company to receive a uniform price from buyers. Growers in New Jersey and Massachusetts were also organized into cooperatives, creating the National Fruit Exchange that marketed fruit under the Eatmor brand. The success of cooperative marketing almost led to its failure. With consistent and high prices, area and production doubled between 1903 and 1917 and prices fell.",
"title": "Marketing and economics"
},
{
"paragraph_id": 32,
"text": "With surplus cranberries and changing American households some enterprising growers began canning cranberries that were below-grade for fresh market. Competition between canners was fierce because profits were thin. The Ocean Spray cooperative was established in 1930 through a merger of three primary processing companies: Ocean Spray Preserving company, Makepeace Preserving Co, and Cranberry Products Co. The new company was called Cranberry Canners, Inc. and used the Ocean Spray label on their products. Since the new company represented over 90% of the market, it would have been illegal under American antitrust laws had attorney John Quarles not found an exemption for agricultural cooperatives. As of 2006, about 65% of the North American industry belongs to the Ocean Spray cooperative.",
"title": "Marketing and economics"
},
{
"paragraph_id": 33,
"text": "In 1958, Morris April Brothers—who produced Eatmor brand cranberry sauce in Tuckahoe, New Jersey—brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, just in time for the Great Cranberry Scare: on 9 November 1959, Secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 cranberry crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that they could not be completely dependent on the holiday market for their products; they had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry-apple juice blends were introduced, followed by other juice blends.",
"title": "Marketing and economics"
},
{
"paragraph_id": 34,
"text": "Prices and production increased steadily during the 1980s and 1990s. Prices peaked at about $65.00 per barrel ($0.65 per pound or $1.43 per kilogram)—a cranberry barrel equals 100 pounds or 45.4 kilograms—in 1996 then fell to $18.00 per barrel ($0.18 per pound or $0.40 per kilogram) in 2001. The cause for the precipitous drop was classic oversupply. Production had outpaced consumption leading to substantial inventory in freezers or as concentrate.",
"title": "Marketing and economics"
},
{
"paragraph_id": 35,
"text": "Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products as well as a number of small handlers and processors.",
"title": "Marketing and economics"
},
{
"paragraph_id": 36,
"text": "The Cranberry Marketing Committee is an organization that was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good quality product. The order has been renewed and modified slightly over the years. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production.",
"title": "Marketing and economics"
},
{
"paragraph_id": 37,
"text": "The Cranberry Marketing Committee, based in Wareham, Massachusetts, represents more than 1,100 cranberry growers and 60 cranberry handlers across Massachusetts, Rhode Island, Connecticut, New Jersey, Wisconsin, Michigan, Minnesota, Oregon, Washington and New York (Long Island). The authority for the actions taken by the Cranberry Marketing Committee is provided in Chapter IX, Title 7, Code of Federal Regulations which is called the Federal Cranberry Marketing Order. The Order is part of the Agricultural Marketing Agreement Act of 1937, identifying cranberries as a commodity good that can be regulated by Congress. The Federal Cranberry Marketing Order has been altered over the years to expand the Cranberry Marketing Committee's ability to develop projects in the United States and around the world. The Cranberry Marketing Committee currently runs promotional programs in the United States, China, India, Mexico, Pan-Europe, and South Korea.",
"title": "Marketing and economics"
},
{
"paragraph_id": 38,
"text": "As of 2016, the European Union was the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, U.S. cranberry exports to China grew exponentially, making China the second largest country importer, reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers.",
"title": "Marketing and economics"
},
{
"paragraph_id": 39,
"text": "Notes",
"title": "References"
},
{
"paragraph_id": 40,
"text": "Further reading",
"title": "References"
},
{
"paragraph_id": 41,
"text": "Media related to Cranberries at Wikimedia Commons",
"title": "External links"
}
] | Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus Vaccinium. In Britain, cranberry may refer to the native species Vaccinium oxycoccos, while in North America, cranberry may refer to Vaccinium macrocarpon. Vaccinium oxycoccos is cultivated in central and northern Europe, while Vaccinium macrocarpon is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, Oxycoccus is regarded as a genus in its own right. Cranberries can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere. Cranberries are low, creeping shrubs or vines up to 2 meters (7 ft) long and 5 to 20 centimeters in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct reflexed petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but with an acidic taste that usually overwhelms its sweetness. In 2020, the United States, Canada, and Chile accounted for 97% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas and Thanksgiving dinners in the United States and Canada, and at Christmas dinner in the United Kingdom. | 2001-11-06T22:13:14Z | 2023-12-25T07:15:05Z | [
"Template:Taxonbar",
"Template:Use dmy dates",
"Template:Infobox agricultural production",
"Template:Cite journal",
"Template:Cite news",
"Template:Commons category-inline",
"Template:Cvt",
"Template:More citations needed",
"Template:Update inline",
"Template:Automatic taxobox",
"Template:Citation needed",
"Template:Cite book",
"Template:ISBN",
"Template:Redirect-synonym",
"Template:Use American English",
"Template:When",
"Template:Ref needed",
"Template:Reflist",
"Template:Main",
"Template:PLANTS",
"Template:About",
"Template:Short description",
"Template:Infobox nutritional value",
"Template:As of",
"Template:Cite web",
"Template:Cookbook",
"Template:Convert",
"Template:NIE Poster",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cranberry |
7,030 | Code coverage | In software engineering, code coverage is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high test coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite.
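As a quick illustration of that arithmetic, using invented numbers: a coverage percentage is simply the number of items the test suite exercised divided by the number of items in the program.

```c
/* Coverage arithmetic with made-up example counts: coverage (%) is
   100 * items executed / items in the program. */
#include <stdio.h>

int main(void)
{
    int statements_total    = 120;  /* hypothetical program size   */
    int statements_executed = 90;   /* hypothetical test-run count */
    printf("statement coverage: %.1f%%\n",
           100.0 * statements_executed / statements_total);  /* 75.0% */
    return 0;
}
```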
Test coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in Communications of the ACM, in 1963.
To measure what percentage of code has been executed by a test suite, one or more coverage criteria are used. These are usually defined as rules or requirements, which a test suite must satisfy.
There are a number of coverage criteria, but the main ones are function coverage (has each function or subroutine in the program been called?), statement coverage (has each statement been executed?), branch coverage (has each branch of each control structure, such as each side of an if, been taken?), and condition coverage (has each Boolean sub-expression evaluated to both true and false?).
For example, consider a small C function such as the one sketched below.
Assume this function is a part of some bigger program and this program was run with some test suite.
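A minimal sketch of such a function, with the names foo, x, y and z chosen purely for illustration; it is an illustrative stand-in rather than the article's own example.

```c
/* Illustrative stand-in: one function containing a single decision
   built from two conditions. */
int foo(int x, int y)
{
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}
```

Against this sketch, a test suite consisting only of the call foo(1, 1) would achieve function coverage (foo is called) and statement coverage (every statement runs), but not branch coverage, because the decision is never false; adding a call such as foo(0, 1) covers the false branch, and condition coverage additionally needs a test in which y > 0 is false, for example foo(1, 0).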
In programming languages that do not perform short-circuit evaluation, condition coverage does not necessarily imply branch coverage. Consider, for example, a Pascal code fragment in which an if statement's condition is the logical AND of two Boolean conditions (a sketch of the situation is given below).
Condition coverage can be satisfied by two tests in which each of the two conditions takes both truth values: for instance, one test with the first condition true and the second false, and one with the first false and the second true.
However, this set of tests does not satisfy branch coverage since neither case will meet the if condition.
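A sketch of the situation, written in C rather than Pascal: the bitwise & operator stands in for a non-short-circuiting logical AND (with operands restricted to 0 and 1 it always evaluates both sides), and the names a and b are illustrative assumptions.

```c
/* With operands restricted to 0 or 1, the bitwise & evaluates both
   sides, mimicking a non-short-circuiting 'and'. */
#include <stdio.h>

void check(int a, int b)            /* a and b are 0 or 1 */
{
    if (a & b) {                    /* the decision in question */
        printf("both conditions hold\n");
    }
}

int main(void)
{
    check(1, 0);   /* first condition true, second false */
    check(0, 1);   /* first condition false, second true */
    /* Each condition has now been seen both true and false, so
       condition coverage holds, yet the decision a & b was false in
       both tests, so the 'true' branch was never taken and branch
       coverage is not achieved. */
    return 0;
}
```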
Fault injection may be necessary to ensure that all conditions and branches of exception-handling code have adequate coverage during testing.
A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and every decision in the program has taken on all possible outcomes at least once. In this context, the decision is a boolean expression comprising conditions and zero or more boolean operators. This definition is not the same as branch coverage, however, the term decision coverage is sometimes used as a synonym for it.
Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (such as avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends condition/decision criteria with requirements that each condition should affect the decision outcome independently.
For example, consider code containing a decision built from three conditions, as in the sketch that follows.
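A hypothetical reconstruction of such a decision: since the surrounding discussion has 'b' irrelevant when every condition is true and 'c' irrelevant when the first two are false, a decision of the form (a or b) and c fits, and that shape is assumed in the C sketch below.

```c
/* Hypothetical reconstruction: a decision over three conditions of
   the form (a || b) && c. */
#include <stdbool.h>

bool decision(bool a, bool b, bool c)
{
    return (a || b) && c;
}

/* Condition/decision coverage can be met with two tests:
     a=true,  b=true,  c=true   -> true
     a=false, b=false, c=false  -> false
   (every condition and the decision itself take both values).

   That pair is not MC/DC: flipping b in the first test, or c in the
   second, leaves the outcome unchanged, so those conditions are not
   shown to matter independently. One minimal MC/DC set is:
     a=false, b=false, c=true   -> false
     a=true,  b=false, c=true   -> true   (vs. row 1: only a changes, outcome flips)
     a=false, b=true,  c=true   -> true   (vs. row 1: only b changes, outcome flips)
     a=false, b=true,  c=false  -> false  (vs. row 3: only c changes, outcome flips) */
```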
The condition/decision criteria will be satisfied by a set of two tests: one in which all three conditions are true, and one in which all three are false.
However, the above test set will not satisfy modified condition/decision coverage, since in the first test the value of 'b', and in the second test the value of 'c', would not influence the output. So a larger test set, in which each condition is shown to independently affect the decision outcome, is needed to satisfy MC/DC; one such four-test set is given in the sketch above.
This criterion requires that all combinations of conditions inside each decision are tested. For example, the three-condition fragment from the previous section requires eight tests, one for each of the 2^3 combinations of truth values of its conditions: (F,F,F), (F,F,T), (F,T,F), (F,T,T), (T,F,F), (T,F,T), (T,T,F), and (T,T,T).
Parameter value coverage (PVC) requires that in a method taking parameters, all the common values for such parameters be considered. The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each of these parameter values may leave a bug undetected. Testing only one of them could still result in 100% code coverage, since each line is covered, but because only one of the seven options is tested, there is only 14.2% PVC.
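A sketch of a parameterized test that feeds the seven kinds of string value listed above to a function under test; the function handle_name and the specific sample values are assumptions made for the example.

```c
/* Illustrative parameter value coverage harness: the seven common
   string values from the text, fed to a hypothetical function. */
#include <stdio.h>

static void handle_name(const char *s)   /* stand-in for the code under test */
{
    if (s == NULL || s[0] == '\0') {
        printf("empty or missing name\n");
    } else {
        printf("name: %s\n", s);
    }
}

int main(void)
{
    const char *cases[] = {
        NULL,         /* 1) null                                    */
        "",           /* 2) empty                                   */
        " \t\n",      /* 3) whitespace (space, tab, newline)        */
        "Alice",      /* 4) valid string                            */
        "\x01\x02",   /* 5) string the code should treat as invalid */
        "A",          /* 6) single-byte string                      */
        "\xC3\xA9",   /* 7) multi-byte (here UTF-8) string          */
    };
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
        handle_name(cases[i]);
    }
    return 0;
}
```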
There are further coverage criteria that are used less often, such as loop coverage, state coverage, and data-flow coverage.
Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage. For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are up to negotiation between supplier and customer. However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons. Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing".
Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch.
Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of n decisions in it can have up to 2^n paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve the halting problem). Basis path testing is for instance a method of achieving complete branch coverage without achieving complete path coverage.
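A small illustration of why paths multiply; the function and its name are invented for this sketch. Three independent, sequential decisions already give 2^3 = 8 distinct paths, even though two tests suffice for full branch coverage.

```c
/* Illustrative only: three sequential, independent decisions give
   2 * 2 * 2 = 8 possible paths through the function. */
int classify(int a, int b, int c)
{
    int score = 0;
    if (a > 0) { score += 1; }     /* decision 1 */
    if (b > 0) { score += 2; }     /* decision 2 */
    if (c > 0) { score += 4; }     /* decision 3 */
    return score;
}
```

Two tests, classify(1, 1, 1) and classify(0, 0, 0), already give full statement and branch coverage, since each decision is seen both true and false, but they exercise only 2 of the 8 paths; full path coverage needs all 8 input combinations.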
Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes.
The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests.
In implementing test coverage policies within a software development environment, one must consider the following:
Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions. Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage.
Generally, test coverage tools incur computation and logging in addition to the actual program, thereby slowing down the application, so typically this analysis is not done in production. As one might expect, there are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.
There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similar real-time-sensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code.
Most professional software developers use C1 and C2 coverage: C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. Statement coverage also implies function coverage (entry and exit), and these criteria are commonly combined with loop, path, state-flow, control-flow and data-flow coverage. With these methods, it is possible to achieve nearly 100% code coverage in most software projects.
Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by the Federal Aviation Administration (FAA) are documented in DO-178B and DO-178C.
Test coverage is also a requirement in part 6 of the automotive safety standard ISO 26262 Road Vehicles - Functional Safety.
7,033 | Caitlin Clarke | Caitlin Clarke (born Katherine Anne Clarke, May 3, 1952 – September 9, 2004) was an American theater and film actress best known for her role as Valerian in the 1981 fantasy film Dragonslayer and for her role as Charlotte Cardoza in the 1998–1999 Broadway musical Titanic.
Clarke was born Catherine Ann Clarke in Pittsburgh, the oldest of five sisters, the youngest of whom is Victoria Clarke. Her family moved to Sewickley when she was ten.
Clarke received her B.A. in theater arts from Mount Holyoke College in 1974 and her M.F.A. from the Yale School of Drama in 1978. During her final year at Yale, Clarke performed with the Yale Repertory Theater in such plays as Tales from the Vienna Woods.
The first few years of Clarke's professional career were largely theatrical, apart from her role in Dragonslayer. After appearing in three Broadway plays in 1985, Clarke moved to Los Angeles for several years as a film and television actress. She appeared in the 1986 film Crocodile Dundee as Simone, a friendly prostitute. She returned to theater in the early 1990s, and to Broadway as Charlotte Cardoza in Titanic.
Clarke was diagnosed with ovarian cancer in 2000. She returned to Pittsburgh to teach theater at the University of Pittsburgh and at the Pittsburgh Musical Theater's Rauh Conservatory, as well as to perform in Pittsburgh theater, until her death on September 9, 2004.
Series: Northern Exposure, The Equalizer, Once a Hero, Moonlighting, Sex and the City, Law & Order ("Menace", "Juvenile", "Stiff"), Matlock ("The Witness").
Movies: Mayflower Madam (1986), Love, Lies and Murder (1991), The Stepford Husbands (1996).
7,034 | Cruiser | A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several roles.
The term "cruiser", which has been in use for several hundred years, has changed its meaning over time. During the Age of Sail, the term cruising referred to certain kinds of missions—independent scouting, commerce protection, or raiding—usually fulfilled by frigates or sloops-of-war, which functioned as the cruising warships of a fleet.
In the middle of the 19th century, cruiser came to be a classification of the ships intended for cruising distant waters, for commerce raiding, and for scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big (although not as powerful or as well-armored) as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships.
By the early 20th century, after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre; whilst the 1930 London Naval Treaty created a divide of two cruiser types, heavy cruisers having 6.1 inches to 8 inch guns, while those with guns of 6.1 inches or less were light cruisers. Each type were limited in total and individual tonnage which shaped cruiser design until the collapse of the treaty system just prior to the start of World War II. Some variations on the Treaty cruiser design included the German Deutschland-class "pocket battleships", which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American Alaska class, which was a scaled-up heavy cruiser design designated as a "cruiser-killer".
In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant ships (aircraft carriers not being considered surface combatants, as their attack capability comes from their air wings rather than on-board weapons). The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task-forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification) primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early Charles F. Adams guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War the line between cruisers and destroyers had blurred, with the Ticonderoga-class cruiser using the hull of the Spruance-class destroyer but receiving the cruiser designation due to their enhanced mission and combat systems.
As of 2023, only three countries operate active duty vessels formally classed as cruisers: the United States, Russia and Italy. These cruisers are primarily armed with guided missiles, with the exceptions of the aircraft cruisers Admiral Kuznetsov and Giuseppe Garibaldi. BAP Almirante Grau was the last gun cruiser in service, serving with the Peruvian Navy until 2017.
Nevertheless, other classes in addition to the above may be considered cruisers under differing classification systems. The US/NATO system includes the Type 055 from China and the Kirov and Slava classes from Russia. The International Institute for Strategic Studies' The Military Balance defines a cruiser as a surface combatant displacing at least 9,750 tonnes; with respect to vessels in service as of the early 2020s, it includes the Type 055, the Sejong the Great class from South Korea, the Atago and Maya classes from Japan, and the Ticonderoga and Zumwalt classes from the US.
The term "cruiser" or "cruizer" was first commonly used in the 17th century to refer to an independent warship. "Cruiser" meant the purpose or mission of a ship, rather than a category of vessel. However, the term was nonetheless used to mean a smaller, faster warship suitable for such a role. In the 17th century, the ship of the line was generally too large, inflexible, and expensive to be dispatched on long-range missions (for instance, to the Americas), and too strategically important to be put at risk of fouling and foundering by continual patrol duties.
The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy—and later French and Spanish navies—subsequently caught up in terms of their numbers and deployment. The British Cruiser and Convoy Acts were an attempt by mercantile interests in Parliament to focus the Navy on commerce defence and raiding with cruisers, rather than the more scarce and expensive ships of the line. During the 18th century the frigate became the preeminent type of cruiser. A frigate was a small, fast, long range, lightly armed (single gun-deck) ship used for scouting, carrying dispatches, and disrupting enemy trade. The other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well.
During the 19th century, navies began to use steam power for their fleets. The 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. Navies were both building steam frigates with very long hulls and a heavy gun armament, for instance USS Merrimack or HMS Mersey.
The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally to serve in the line of battle. In spite of their great speed, they would have been wasted in a cruising role.
The French constructed a number of smaller ironclads for overseas cruising duties, starting with the Belliqueuse, commissioned 1865. These "station ironclads" were the beginning of the development of the armored cruisers, a type of ironclad specifically for the traditional cruiser missions of fast, independent raiding and patrol.
The first true armored cruiser was the Russian General-Admiral, completed in 1874, and followed by the British Shannon a few years later.
Until the 1890s armored cruisers were still built with masts for a full sailing rig, to enable them to operate far from friendly coaling stations.
Unarmored cruising warships, built out of wood, iron, steel or a combination of those materials, remained popular until towards the end of the 19th century. The ironclad's armor often meant that they were limited to short range under steam, and many ironclads were unsuited to long-range missions or for work in distant colonies. The unarmored cruiser—often a screw sloop or screw frigate—could continue in this role. Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between HMS Shah, a modern British cruiser, and the Peruvian monitor Huáscar. Even though the Peruvian vessel was obsolete by the time of the encounter, it stood up well to roughly 50 hits from British shells.
In the 1880s, naval engineers began to use steel as a material for construction and armament. A steel cruiser could be lighter and faster than one built of iron or wood. The Jeune École school of naval doctrine suggested that a fleet of fast unprotected steel cruisers was ideal for commerce raiding, while the torpedo boat would be able to destroy an enemy battleship fleet.
Steel also offered the cruiser a way of acquiring the protection needed to survive in combat. Steel armor was considerably stronger, for the same weight, than iron. By putting a relatively thin layer of steel armor above the vital parts of the ship, and by placing the coal bunkers where they might stop shellfire, a useful degree of protection could be achieved without slowing the ship too much. Protected cruisers generally had an armored deck with sloped sides, providing similar protection to a light armored belt at less weight and expense.
The first protected cruiser was the Chilean ship Esmeralda, launched in 1883. Produced by a shipyard at Elswick, in Britain, owned by Armstrong, she inspired a group of protected cruisers produced in the same yard and known as the "Elswick cruisers". Her forecastle, poop deck and the wooden board deck had been removed, replaced with an armored deck.
Esmeralda's armament consisted of fore and aft 10-inch (25.4 cm) guns and 6-inch (15.2 cm) guns in the midships positions. She could reach a speed of 18 knots (33 km/h) and was propelled by steam alone. She also had a displacement of less than 3,000 tons. During the two following decades, this cruiser type came to be the inspiration for combining heavy artillery, high speed and low displacement.
The torpedo cruiser (known in the Royal Navy as the torpedo gunboat) was a smaller unarmored cruiser, which emerged in the 1880s–1890s. These ships could reach speeds up to 20 knots (37 km/h) and were armed with medium to small calibre guns as well as torpedoes. These ships were tasked with guard and reconnaissance duties, to repeat signals and all other fleet duties for which smaller vessels were suited. These ships could also function as flagships of torpedo boat flotillas. After the 1900s, these ships were usually traded for faster ships with better sea going qualities.
Steel also affected the construction and role of armored cruisers. Steel meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s and early 1900s greatly resembled the battleships of the day; they tended to carry slightly smaller main armament (7.5-to-10-inch (190 to 250 mm) rather than 12-inch) and have somewhat thinner armor in exchange for a faster speed (perhaps 21 to 23 knots (39 to 43 km/h) rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred.
Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them "a miser's hoard of useless junk" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type.
The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship; the brainchild of British admiral Jackie Fisher. He believed that to ensure British naval dominance in its overseas colonial possessions, a fleet of large, fast, powerfully armed vessels which would be able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority was needed. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the battlecruiser, and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to "choose their range" (to the enemy) with superior speed and only engage the enemy at long range. When engaged at moderate ranges, the lack of protection combined with unsafe ammunition handling practices became tragic with the loss of three of them at the Battle of Jutland. Germany and eventually Japan followed suit to build these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much larger propulsion plants.
At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. By the British Town class, the first of which was launched in 1909, it was possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet.
Some light cruisers were built specifically to act as the leaders of flotillas of destroyers.
The smallest cruising vessels of this period were essentially large coastal patrol boats armed with multiple light guns. One such warship was Grivița of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns.
The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in long-range cruiser coverage or to provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships.
Some large liners were armed in the same way. In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and in the early part of the Second, they were used as convoy escorts by the British.
Cruisers were one of the workhorse warship types of World War I. By then cruisers had developed rapidly and improved significantly in quality, with displacements reaching 3,000–4,000 tons, speeds of 25–30 knots, and main gun calibres of 127–152 mm.
Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as "treaty cruisers".
The London Naval Treaty in 1930 then formalised the distinction between these "heavy" cruisers and light cruisers: a "heavy" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period of the major powers building 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the Mogami class, launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun Southampton class in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing "super-heavy" armor piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun Brooklyn-class cruisers in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with USS Wichita in 1937.
The heavy cruiser was a type of cruiser designed for long range, high speed and an armament of naval guns around 203 mm (8 in) in calibre. The first heavy cruisers were built in 1915, although it only became a widespread classification following the London Naval Treaty in 1930. The heavy cruiser's immediate precursors were the light cruiser designs of the 1910s and 1920s; the US lightly armored 8-inch "treaty cruisers" of the 1920s (built under the Washington Naval Treaty) were originally classed as light cruisers until the London Treaty forced their redesignation.
Initially, all cruisers built under the Washington treaty had torpedo tubes, regardless of nationality. However, in 1930, results of war games caused the US Naval War College to conclude that only perhaps half of cruisers would use their torpedoes in action. In a surface engagement, long-range gunfire and destroyer torpedoes would decide the issue, and under air attack numerous cruisers would be lost before getting within torpedo range. Thus, beginning with USS New Orleans launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the Furutaka class launched in 1925, every Japanese heavy cruiser was armed with 24-inch (610 mm) torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed "Long Lance" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of 22,000 metres (24,000 yd) at 50 knots (93 km/h; 58 mph), compared with the US Mark 15 torpedo with 5,500 metres (6,000 yd) at 45 knots (83 km/h; 52 mph). The Mark 15 had a maximum range of 13,500 metres (14,800 yd) at 26.5 knots (49.1 km/h; 30.5 mph), still well below the "Long Lance". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943, thus the Allies faced a great threat they were not aware of in 1942. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers.
Heavy cruisers continued in use until after World War II, with some converted to guided-missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War.
The German Deutschland class was a series of three Panzerschiffe ("armored ships"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. The similar Swedish Panzerschiffe were tactically used as centers of battlefleets and not as cruisers. They were deployed by Nazi Germany in support of the German interests in the Spanish Civil War. Panzerschiff Admiral Graf Spee represented Germany in the 1937 Coronation Fleet Review.
The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty compliant 10,000 ton cruisers. And although their displacement and scale of armor protection were that of a heavy cruiser, their 280 mm (11 in) main armament was heavier than the 203 mm (8 in) guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships. The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS Hood, HMS Repulse and HMS Renown were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, Deutschland-class ships continued to be called pocket battleships in the popular press.
The American Alaska class represented the supersized cruiser design. Due to the German pocket battleships, the Scharnhorst class, and rumored Japanese "super cruisers", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the Alaskas were intended to be "cruiser-killers". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled a scaled-up heavy cruiser design. Their hull classification symbol of CB (cruiser, big) reflected this.
A precursor to the anti-aircraft cruiser was the Romanian British-built protected cruiser Elisabeta. After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire.
The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed HMS Coventry and HMS Curlew. Torpedo tubes and 6-inch (152 mm) low-angle guns were removed from these World War I light cruisers and replaced with ten 4-inch (102 mm) high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers.
A tactical shortcoming was recognised after completing six additional conversions of C-class cruisers. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers.
The first purpose built anti-aircraft cruiser was the British Dido class, completed in 1940–42. The US Navy's Atlanta-class cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both Dido and Atlanta cruisers initially carried torpedo tubes; the Atlanta cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949.
The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: USS Worcester, completed in 1948; USS Roanoke, completed in 1949; two Tre Kronor-class cruisers, completed in 1947; two De Zeven Provinciën-class cruisers, completed in 1953; De Grasse, completed in 1955; Colbert, completed in 1959; and HMS Tiger, HMS Lion and HMS Blake, all completed between 1959 and 1961.
Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided-missile cruiser (CAG/CLG/CG/CGN).
Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf.
In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the Mogami and Tone classes as heavy cruisers by replacing their 6.1 in (155 mm) triple turrets with 8 in (203 mm) twin turrets. Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen 24 in (610 mm) tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers Ōi and Kitakami were converted to torpedo cruisers with four 5.5 in (140 mm) guns and forty 24 in (610 mm) torpedo tubes. In 1944 Kitakami was further converted to carry up to eight Kaiten human torpedoes in place of ordinary torpedoes.
Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. The largest heavy cruisers eventually reached 20,000–30,000 tons displacement, speeds of 32–34 knots, endurance of more than 10,000 nautical miles, and armor 127–203 mm thick. Heavy cruisers were equipped with eight or nine 8 in (203 mm) guns with a range of more than 20 nautical miles, and were mainly used to attack enemy surface ships and shore-based targets. In addition, they carried 10–16 secondary guns with a caliber of less than 130 mm (5.1 in) and dozens of automatic antiaircraft guns to fight aircraft and small vessels such as torpedo boats. For example, the American Alaska-class cruisers of World War II displaced more than 30,000 tons and were equipped with nine 12 in (305 mm) guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces that dominated surface warfare until the ascendancy of carrier aviation.
After the signing of the Washington Naval Treaty in 1922, the tonnage and numbers of battleships and aircraft carriers were severely restricted, and cruisers were limited in size and armament; in order not to violate the treaty, navies concentrated on developing light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and speeds of up to 35 knots. They were equipped with 6–12 main guns of roughly 140–155 mm (5.5–6.1 in) caliber, 8–12 secondary guns under 127 mm (5 in), and dozens of small-caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen 155 mm (6.1 in) guns. The Japanese Mogami class was built to this treaty's limit, and the Americans and British also built similar ships. However, in 1939 the Mogamis were refitted as heavy cruisers with ten 203 mm (8.0 in) guns.
In December 1939, three British cruisers engaged the German "pocket battleship" Admiral Graf Spee, which was on a commerce-raiding mission, in the Battle of the River Plate; Admiral Graf Spee then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating that capital ships were in the area, the British led Admiral Graf Spee's captain, who was low on ammunition, to think he faced a hopeless situation and to order his ship scuttled. On 8 June 1940 the German capital ships Scharnhorst and Gneisenau, classed as battleships but with large cruiser armament, sank the aircraft carrier HMS Glorious with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as "pocket battleship", see above) Admiral Scheer conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans.
On 27 May 1941, HMS Dorsetshire attempted to finish off the German battleship Bismarck with torpedoes, probably causing the Germans to scuttle the ship. Bismarck (accompanied by the heavy cruiser Prinz Eugen) previously sank the battlecruiser HMS Hood and damaged the battleship HMS Prince of Wales with gunfire in the Battle of the Denmark Strait.
On 19 November 1941 HMAS Sydney sank in a mutually fatal engagement with the German raider Kormoran in the Indian Ocean near Western Australia.
Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak.
In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser Admiral Scheer, failed due to multiple German warships grounding, but air and submarine attacks sank two-thirds of the convoy's ships. In August 1942 Admiral Scheer conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success.
On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers (HMS Sheffield and HMS Jamaica) and two destroyers were in the area. Two heavy cruisers (one the "pocket battleship" Lützow), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes.
On 26 December 1943 the German capital ship Scharnhorst was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship HMS Duke of York, accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved HMS Belfast.
Scharnhorst's sister Gneisenau, damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six 38 cm (15 in) guns instead of nine 28 cm (11 in) guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway.
The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS Prince of Wales and the battlecruiser HMS Repulse were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944.
Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. USS North Carolina was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs.
Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four Kongō-class ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that Yamato sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the Kongō class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked.
From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers.
Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of—the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) "treaty cruisers"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic.
Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was the damaged USS Marblehead. Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced.
After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces.
Battle of Savo Island
On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands.
Along with their superior torpedoes, the opening Japanese gunfire was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway.
Battle of the Eastern Solomons
On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the "Tokyo Express" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night.
Battle of Cape Esperance
The Battle of Cape Esperance occurred on the night of 11–12 October 1942. A Tokyo Express mission was underway for Guadalcanal at the same time as a separate cruiser-destroyer bombardment group loaded with high explosive shells for bombarding Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action.
The bombardment force was sighted at close range (5,000 yards (4,600 m)) and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of Boise alone. The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th.
Battle of the Santa Cruz Islands

The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943.
Naval Battle of Guadalcanal

The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion. The destroyer USS Laffey closed with the battleship Hiei, firing all torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including Laffey, with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, USS Enterprise, and Espiritu Santo found the damaged battleship and two destroyers in the area. The battleship (Hiei) was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged USS Portland, and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November, one (Juneau) torpedoed by a Japanese submarine, and the other sank on the way to repairs. Juneau's loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers.
The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship Kirishima) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as Kirishima had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage. The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had fighter cover from the carrier Jun'yō, six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action.
On the night of 14–15 November a Japanese force of Kirishima, two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships (Washington and South Dakota) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting Enterprise, but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on Kirishima, the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, South Dakota spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, "deaf, dumb, blind, and impotent". Washington went undetected by the Japanese for most of the battle and withheld fire to avoid hitting friendly ships, but once South Dakota was illuminated by Japanese fire she opened up, quickly setting Kirishima ablaze and leaving her with a jammed rudder and other damage. Washington, finally spotted by the Japanese, then headed for the Russell Islands in the hope of drawing the Japanese away from Guadalcanal and South Dakota, and successfully evaded several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. Kirishima sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed.
Battle of Tassafaronga

The Battle of Tassafaronga took place on the night of 30 November – 1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire (it later sank), but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off two of them. Significantly, these two were not lost to Long Lance hits as similar ships had been in previous battles; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal. Another attempt on 3 December dropped 1,500 drums of supplies offshore of Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This run took place on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of the 1,200 drums released offshore were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943.
After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and the isolation of Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July 1943. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. Despite being a Japanese tactical victory, this battle caused the Japanese to use a different route in the future, where they were more vulnerable to destroyer and PT boat attacks.
The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle, an indication that Allied airborne radar had likely progressed far enough to allow night air operations. The Americans had four of the new Cleveland-class cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were hampered by collisions and shells that failed to explode, and each proved adept at dodging the other's torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover.
After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations.
From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the "Marianas Turkey Shoot" by the US Navy.
The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to disrupt the American invasion of the Philippines in October 1944. The two actions in this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait.
Battle of Surigao Strait

The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of Fusō and Yamashiro, one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force guarding the strait, known as the 7th Fleet Support Force, was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The Japanese force's only advantage was that most of the Allied battleships and cruisers were loaded mainly with high-explosive shells, although a significant number of armor-piercing shells were also carried. The lead Japanese force evaded the PT boats' torpedoes but was hit hard by the destroyers' torpedoes, losing a battleship. It then encountered the battleship and cruiser guns; only one destroyer survived. The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force was able to "cross the T" of the Japanese column, making this the last battle in which that occurred, although it was not a planned maneuver. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American veteran of Surigao Strait, USS Phoenix, was transferred to Argentina in 1951 as General Belgrano, becoming most famous for being sunk by HMS Conqueror in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II.
Battle off Samar

At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as "Taffy 3" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were 5 in (127 mm)/38 caliber guns, while the Japanese had 14 in (356 mm), 16 in (406 mm), and 18.1 in (460 mm) guns. Aircraft from six additional escort carriers also participated, for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including Yamato, six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing Yamato's sister Musashi. Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which kamikaze attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and the maneuvering needed to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to believe they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours earlier at Surigao Strait, and did not know that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it had not already, the Japanese withdrew, eventually losing three heavy cruisers sunk and three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged; over one-third of the engaged force was sunk and nearly all the remainder damaged.
The US built cruisers in quantity through the end of the war, notably 14 Baltimore-class heavy cruisers and 27 Cleveland-class light cruisers, along with eight Atlanta-class anti-aircraft cruisers. The Cleveland class was the largest cruiser class ever built in terms of ships completed, with nine additional Clevelands finished as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous Essex-class aircraft carriers being built. Having lost four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six 6.1 in (155 mm) guns each. Having lost 20 cruisers in 1940–42, the British completed no heavy cruisers but did complete thirteen light cruisers (Fiji and Minotaur classes) and sixteen anti-aircraft cruisers (Dido class) during the war.
The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships. Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack; but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines.
The US Navy was aware of the potential missile threat as soon as World War II ended, and had considerable related experience due to Japanese kamikaze attacks in that war. The initial response was to upgrade the light AA armament of new cruisers from 40 mm and 20 mm weapons to twin 3-inch (76 mm)/50 caliber gun mounts. For the longer term, it was thought that gun systems would be inadequate to deal with the missile threat, and by the mid-1950s three naval SAM systems were developed: Talos (long range), Terrier (medium range), and Tartar (short range). Talos and Terrier were nuclear-capable and this allowed their use in anti-ship or shore bombardment roles in the event of nuclear war. Chief of Naval Operations Admiral Arleigh Burke is credited with speeding the development of these systems.
Terrier was initially deployed on two converted Baltimore-class cruisers (CAG), with conversions completed in 1955–56. Further conversions of six Cleveland-class cruisers (CLG) (Galveston and Providence classes), redesign of the Farragut class as guided-missile "frigates" (DLG), and development of the Charles F. Adams-class DDGs resulted in the completion of numerous additional guided-missile ships deploying all three systems in 1959–1962. Also completed during this period was the nuclear-powered USS Long Beach, with two Terrier and one Talos launchers, plus an ASROC anti-submarine launcher the World War II conversions lacked. The converted World War II cruisers up to this point retained one or two main battery turrets for shore bombardment. However, in 1962–1964 three additional Baltimore and Oregon City-class cruisers were more extensively converted as the Albany class. These had two Talos and two Tartar launchers plus ASROC and two 5-inch (127 mm) guns for self-defense, and were primarily built to get greater numbers of Talos launchers deployed. Of all these types, only the Farragut DLGs were selected as the design basis for further production, although their Leahy-class successors were significantly larger (5,670 tons standard versus 4,150 tons standard) due to a second Terrier launcher and greater endurance. An economical crew size compared with World War II conversions was probably a factor, as the Leahys required a crew of only 377 versus 1,200 for the Cleveland-class conversions. Through 1980, the ten Farraguts were joined by four additional classes and two one-off ships for a total of 36 guided-missile frigates, eight of them nuclear-powered (DLGN). In 1975 the Farraguts were reclassified as guided-missile destroyers (DDG) due to their small size, and the remaining DLG/DLGN ships became guided-missile cruisers (CG/CGN). The World War II conversions were gradually retired between 1970 and 1980; the Talos missile was withdrawn in 1980 as a cost-saving measure and the Albanys were decommissioned. Long Beach had her Talos launcher removed in a refit shortly thereafter; the deck space was used for Harpoon missiles. Around this time the Terrier ships were upgraded with the RIM-67 Standard ER missile. The guided-missile frigates and cruisers served in the Cold War and the Vietnam War; off Vietnam they performed shore bombardment and shot down enemy aircraft or, as Positive Identification Radar Advisory Zone (PIRAZ) ships, guided fighters to intercept enemy aircraft. By 1995 the former guided-missile frigates were replaced by the Ticonderoga-class cruisers and Arleigh Burke-class destroyers.
The U.S. Navy's guided-missile cruisers were built upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification). As the U.S. Navy's strike role was centered around aircraft carriers, cruisers were primarily designed to provide air defense, while often adding anti-submarine capabilities. The U.S. cruisers built in the 1960s and 1970s were larger, often nuclear-powered for extended endurance in escorting nuclear-powered fleet carriers, and carried longer-range surface-to-air missiles (SAMs) than the early Charles F. Adams guided-missile destroyers tasked with the short-range air defense role. The U.S. cruisers contrasted sharply with their Soviet contemporaries, the "rocket cruisers" armed with large numbers of anti-ship cruise missiles (ASCMs) as part of a combat doctrine of saturation attack, though in the early 1980s the U.S. Navy retrofitted some of its existing cruisers to carry a small number of Harpoon anti-ship missiles and Tomahawk cruise missiles.
The line between U.S. Navy cruisers and destroyers blurred with the Spruance class. Although originally designed for anti-submarine warfare, a Spruance destroyer was comparable in size to existing U.S. cruisers and had the advantage of an enclosed hangar (with space for up to two medium-lift helicopters), a considerable improvement over the basic aviation facilities of earlier cruisers. The Spruance hull design was used as the basis for two classes: the Kidd class, which had anti-air capabilities comparable to contemporary cruisers, and the DDG-47-class destroyers, which were redesignated as the Ticonderoga-class guided-missile cruisers to emphasize the additional capability provided by the ships' Aegis combat systems and their flag facilities suitable for an admiral and his staff. In addition, 24 members of the Spruance class were upgraded with the vertical launch system (VLS) for Tomahawk cruise missiles, made possible by the class's modular hull design. Along with the similarly VLS-equipped Ticonderoga class, these ships had anti-surface strike capabilities beyond those of the 1960s–1970s cruisers that received Tomahawk armored-box launchers as part of the New Threat Upgrade. Like the Ticonderoga ships with VLS, the Arleigh Burke and Zumwalt classes, despite being classified as destroyers, actually have much heavier anti-surface armament than previous U.S. ships classified as cruisers.
Prior to the introduction of the Ticonderogas, the US Navy used odd naming conventions that left its fleet seemingly without many cruisers, although a number of its ships were cruisers in all but name. From the 1950s to the 1970s, US Navy cruisers were large vessels equipped with heavy, specialized missiles (mostly surface-to-air, but for several years including the Regulus nuclear cruise missile) for wide-ranging combat against land-based and sea-based targets. All save one, USS Long Beach, were converted from World War II cruisers of the Oregon City, Baltimore and Cleveland classes. Long Beach was also the last cruiser built with a World War II-era cruiser-style hull (characterized by a long, lean form); later ships designated as cruisers were either redesignated frigates (DLG/CG USS Bainbridge, USS Truxtun, and the Leahy, Belknap, California, and Virginia classes) or uprated destroyers (the DDG/CG Ticonderoga class was built on a Spruance-class destroyer hull).
Frigates under this scheme were almost as large as the cruisers and optimized for anti-aircraft warfare, although they were capable anti-surface warfare combatants as well. In the late 1960s, the US government perceived a "cruiser gap": at the time, the US Navy possessed six ships designated as cruisers, compared to 19 for the Soviet Union, even though the USN had 21 ships designated as frigates whose capabilities were equal or superior to those of the Soviet cruisers. Because of this, in 1975 the Navy performed a massive redesignation of its forces:
Also, a series of Patrol Frigates of the Oliver Hazard Perry class, originally designated PFG, were redesignated into the FFG line. The cruiser-destroyer-frigate realignment and the deletion of the Ocean Escort type brought the US Navy's ship designations into line with the rest of the world's, eliminating confusion with foreign navies. In 1980, the Navy's then-building DDG-47-class destroyers were redesignated as cruisers (Ticonderoga guided-missile cruisers) to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff.
In the Soviet Navy, cruisers formed the basis of combat groups. In the immediate post-war era it built a fleet of gun-armed light cruisers, but replaced these beginning in the early 1960s with large ships called "rocket cruisers", carrying large numbers of anti-ship cruise missiles (ASCMs) and anti-aircraft missiles. The Soviet combat doctrine of saturation attack meant that these cruisers (as well as destroyers and even missile boats) mounted multiple missiles in large container/launch tube housings and carried far more ASCMs than their NATO counterparts, whereas NATO combatants used individually smaller and lighter missiles and consequently appeared under-armed by comparison.
In 1962–1965 the four Kynda-class cruisers entered service; these had launchers for eight long-range SS-N-3 Shaddock ASCMs with a full set of reloads, the missiles having a range of up to 450 kilometres (280 mi) with mid-course guidance. The four more modest Kresta I-class cruisers, with launchers for four SS-N-3 ASCMs and no reloads, entered service in 1967–69. In 1969–79 Soviet cruiser numbers more than tripled, with ten Kresta II-class cruisers and seven Kara-class cruisers entering service. These had launchers for eight large-diameter missiles whose purpose was initially unclear to NATO. This was the SS-N-14 Silex, an over/under rocket-delivered heavyweight torpedo primarily for the anti-submarine role, but capable of anti-surface action with a range of up to 90 kilometres (56 mi). Soviet doctrine had shifted: powerful anti-submarine vessels (designated "Large Anti-Submarine Ships", but listed as cruisers in most references) were needed to destroy NATO submarines so that Soviet ballistic missile submarines could get within range of the United States in the event of nuclear war. By this time Long Range Aviation and the Soviet submarine force could deploy numerous ASCMs. Doctrine later shifted back to overwhelming carrier group defenses with ASCMs, with the Slava and Kirov classes.
The most recent Soviet/Russian rocket cruisers, the four Kirov-class battlecruisers, were built in the 1970s and 1980s. One of the Kirov class is in refit and two are being scrapped, leaving Pyotr Velikiy in active service. Russia also operates two Slava-class cruisers and one Admiral Kuznetsov-class carrier, which is officially designated as a cruiser, specifically a "heavy aviation cruiser" (Russian: тяжелый авианесущий крейсер), due to her complement of 12 P-700 Granit supersonic anti-ship missiles.
Currently, the Kirov-class heavy missile cruisers are used for command purposes, with Pyotr Velikiy serving as flagship of the Northern Fleet. Their air defense capabilities remain powerful, as shown by the array of point-defense missiles they carry, from 44 OSA-MA missiles to 196 9K311 Tor missiles. For longer-range targets the S-300 is used, and for close-in targets the AK-630 or Kashtan CIWSs are used. In addition, Kirovs carry 20 P-700 Granit missiles for anti-ship warfare, and three helicopters for target acquisition beyond the radar horizon. Besides this vast array of armament, Kirov-class cruisers are outfitted with extensive sensor and communications suites, allowing them to lead the fleet.
The United States Navy has centered on the aircraft carrier since World War II. The Ticonderoga-class cruisers, built in the 1980s, were originally designed and designated as a class of destroyer, intended to provide very powerful air defense in these carrier-centered fleets. Outside the US and Soviet navies, new cruisers were rare following World War II. Most navies use guided-missile destroyers for fleet air defense, and destroyers and frigates to carry cruise missiles. The need to operate in task forces has led most navies to change to fleets designed around ships dedicated to a single role, typically anti-submarine or anti-aircraft, and the large "generalist" ship has disappeared from most forces. The United States Navy, the Russian Navy and the Italian Navy are the only remaining navies which operate active-duty cruisers. Italy used Vittorio Veneto until 2003 (decommissioned in 2006) and continues to use Giuseppe Garibaldi as of 2023; France operated a single helicopter cruiser, Jeanne d'Arc, for training purposes only until May 2010. While the Type 055 of the Chinese Navy is classified as a cruiser by the U.S. Department of Defense, the Chinese consider it a guided-missile destroyer.
In the years since the launch of Ticonderoga in 1981, the class has received a number of upgrades that have dramatically improved its members' capabilities for anti-submarine and land attack (using the Tomahawk missile). Like their Soviet counterparts, the modern Ticonderogas can also be used as the basis for an entire battle group. Their cruiser designation was almost certainly deserved when first built, as their sensors and combat management systems enable them to act as flagships for a surface warship flotilla if no carrier is present, but newer ships rated as destroyers and also equipped with Aegis approach them very closely in capability, and once more blur the line between the two classes.
If the Ukrainian account of the sinking of the Russian cruiser Moskva in April 2022 is correct, it raises questions about the vulnerability of surface ships to cruise missiles: the ship was hit by only two new, and virtually untested, R-360 Neptune missiles.
From time to time, some navies have experimented with aircraft-carrying cruisers. One example is the Swedish Gotland. Another was the Japanese Mogami, which was converted to carry a large floatplane group in 1942. Another variant is the helicopter cruiser. The last example in service was the Soviet Navy's Kiev class, whose last unit Admiral Gorshkov was converted to a pure aircraft carrier and sold to India as INS Vikramaditya. The Russian Navy's Admiral Kuznetsov is nominally designated as an aviation cruiser but otherwise resembles a standard medium aircraft carrier, albeit with a surface-to-surface missile battery. The Royal Navy's aircraft-carrying Invincible class and the Italian Navy's aircraft-carrying Giuseppe Garibaldi vessels were originally designated 'through-deck cruisers', but have since been designated as small aircraft carriers (although the 'C' in the pennant for Giuseppe Garibaldi indicates it retains some status as an aircraft-carrying cruiser). Similarly, the Japan Maritime Self-Defense Force's Hyūga-class "helicopter destroyers" are really more along the lines of helicopter cruisers in function and aircraft complement, but due to the Treaty of San Francisco, must be designated as destroyers.
One cruiser alternative studied by the United States in the late 1980s was variously designated a Mission Essential Unit (MEU) or CG V/STOL. In a return to the concept of the independent-operations cruiser-carriers of the 1930s and the Soviet Kiev class, the ship was to be fitted with a hangar, elevators, and a flight deck. The mission systems were Aegis, the SQS-53 sonar, 12 SV-22 ASW aircraft, and 200 VLS cells. The resulting ship would have had a waterline length of 700 feet, a waterline beam of 97 feet, and a displacement of about 25,000 tons. Other features included an integrated electric drive and advanced computer systems, both stand-alone and networked. It was part of the U.S. Navy's "Revolution at Sea" effort. The project was curtailed by the sudden end of the Cold War and its aftermath; otherwise the first of class would likely have been ordered in the early 1990s.
Few cruisers are still operational in the world's navies. Those that remain in service today are:
The following is laid up:
The following are classified as destroyers by their respective operators, but, due to their size and capabilities, are considered to be cruisers by some, all having full load displacements of at least 10,000 tons:
As of 2019, several decommissioned cruisers have been saved from scrapping and exist worldwide as museum ships. They are:
{
"paragraph_id": 0,
"text": "A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several roles.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term \"cruiser\", which has been in use for several hundred years, has changed its meaning over time. During the Age of Sail, the term cruising referred to certain kinds of missions—independent scouting, commerce protection, or raiding—usually fulfilled by frigates or sloops-of-war, which functioned as the cruising warships of a fleet.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the middle of the 19th century, cruiser came to be a classification of the ships intended for cruising distant waters, for commerce raiding, and for scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big (although not as powerful or as well-armored) as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships.",
"title": ""
},
{
"paragraph_id": 3,
"text": "By the early 20th century, after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre; whilst the 1930 London Naval Treaty created a divide of two cruiser types, heavy cruisers having 6.1 inches to 8 inch guns, while those with guns of 6.1 inches or less were light cruisers. Each type were limited in total and individual tonnage which shaped cruiser design until the collapse of the treaty system just prior to the start of World War II. Some variations on the Treaty cruiser design included the German Deutschland-class \"pocket battleships\", which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American Alaska class, which was a scaled-up heavy cruiser design designated as a \"cruiser-killer\".",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant ships (aircraft carriers not being considered surface combatants, as their attack capability comes from their air wings rather than on-board weapons). The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task-forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls (some called \"destroyer leaders\" or \"frigates\" prior to the 1975 reclassification) primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early Charles F. Adams guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War the line between cruisers and destroyers had blurred, with the Ticonderoga-class cruiser using the hull of the Spruance-class destroyer but receiving the cruiser designation due to their enhanced mission and combat systems.",
"title": ""
},
{
"paragraph_id": 5,
"text": "As of 2023, only three countries operate active duty vessels formally classed as cruisers: the United States, Russia and Italy. These cruisers are primarily armed with guided missiles, with the exceptions of the aircraft cruisers Admiral Kuznetsov and Giuseppe Garibaldi. BAP Almirante Grau was the last gun cruiser in service, serving with the Peruvian Navy until 2017.",
"title": ""
},
{
"paragraph_id": 6,
"text": "Nevertheless, other classes in addition to the above may be considered cruisers due to differing classification systems. The US/NATO system includes the Type 055 from China and the Kirov and Slava from Russia. International Institute for Strategic Studies' \"The Military Balance\" defines a cruiser as a surface combatant displacing at least 9750 tonnes; with respect to vessels in service as of the early 2020s it includes the Type 055, the Sejong the Great from South Korea, the Atago and Maya from Japan and the Ticonderoga and Zumwalt from the US.",
"title": ""
},
{
"paragraph_id": 7,
"text": "The term \"cruiser\" or \"cruizer\" was first commonly used in the 17th century to refer to an independent warship. \"Cruiser\" meant the purpose or mission of a ship, rather than a category of vessel. However, the term was nonetheless used to mean a smaller, faster warship suitable for such a role. In the 17th century, the ship of the line was generally too large, inflexible, and expensive to be dispatched on long-range missions (for instance, to the Americas), and too strategically important to be put at risk of fouling and foundering by continual patrol duties.",
"title": "Early history"
},
{
"paragraph_id": 8,
"text": "The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy—and later French and Spanish navies—subsequently caught up in terms of their numbers and deployment. The British Cruiser and Convoy Acts were an attempt by mercantile interests in Parliament to focus the Navy on commerce defence and raiding with cruisers, rather than the more scarce and expensive ships of the line. During the 18th century the frigate became the preeminent type of cruiser. A frigate was a small, fast, long range, lightly armed (single gun-deck) ship used for scouting, carrying dispatches, and disrupting enemy trade. The other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well.",
"title": "Early history"
},
{
"paragraph_id": 9,
"text": "During the 19th century, navies began to use steam power for their fleets. The 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. Navies were both building steam frigates with very long hulls and a heavy gun armament, for instance USS Merrimack or Mersey.",
"title": "Steam cruisers"
},
{
"paragraph_id": 10,
"text": "The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally to serve in the line of battle. In spite of their great speed, they would have been wasted in a cruising role.",
"title": "Steam cruisers"
},
{
"paragraph_id": 11,
"text": "The French constructed a number of smaller ironclads for overseas cruising duties, starting with the Belliqueuse, commissioned 1865. These \"station ironclads\" were the beginning of the development of the armored cruisers, a type of ironclad specifically for the traditional cruiser missions of fast, independent raiding and patrol.",
"title": "Steam cruisers"
},
{
"paragraph_id": 12,
"text": "The first true armored cruiser was the Russian General-Admiral, completed in 1874, and followed by the British Shannon a few years later.",
"title": "Steam cruisers"
},
{
"paragraph_id": 13,
"text": "Until the 1890s armored cruisers were still built with masts for a full sailing rig, to enable them to operate far from friendly coaling stations.",
"title": "Steam cruisers"
},
{
"paragraph_id": 14,
"text": "Unarmored cruising warships, built out of wood, iron, steel or a combination of those materials, remained popular until towards the end of the 19th century. The ironclad's armor often meant that they were limited to short range under steam, and many ironclads were unsuited to long-range missions or for work in distant colonies. The unarmored cruiser—often a screw sloop or screw frigate—could continue in this role. Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between HMS Shah, a modern British cruiser, and the Peruvian monitor Huáscar. Even though the Peruvian vessel was obsolete by the time of the encounter, it stood up well to roughly 50 hits from British shells.",
"title": "Steam cruisers"
},
{
"paragraph_id": 15,
"text": "In the 1880s, naval engineers began to use steel as a material for construction and armament. A steel cruiser could be lighter and faster than one built of iron or wood. The Jeune Ecole school of naval doctrine suggested that a fleet of fast unprotected steel cruisers were ideal for commerce raiding, while the torpedo boat would be able to destroy an enemy battleship fleet.",
"title": "Steel cruisers"
},
{
"paragraph_id": 16,
"text": "Steel also offered the cruiser a way of acquiring the protection needed to survive in combat. Steel armor was considerably stronger, for the same weight, than iron. By putting a relatively thin layer of steel armor above the vital parts of the ship, and by placing the coal bunkers where they might stop shellfire, a useful degree of protection could be achieved without slowing the ship too much. Protected cruisers generally had an armored deck with sloped sides, providing similar protection to a light armored belt at less weight and expense.",
"title": "Steel cruisers"
},
{
"paragraph_id": 17,
"text": "The first protected cruiser was the Chilean ship Esmeralda, launched in 1883. Produced by a shipyard at Elswick, in Britain, owned by Armstrong, she inspired a group of protected cruisers produced in the same yard and known as the \"Elswick cruisers\". Her forecastle, poop deck and the wooden board deck had been removed, replaced with an armored deck.",
"title": "Steel cruisers"
},
{
"paragraph_id": 18,
"text": "Esmeralda's armament consisted of fore and aft 10-inch (25.4 cm) guns and 6-inch (15.2 cm) guns in the midships positions. It could reach a speed of 18 knots (33 km/h), and was propelled by steam alone. It also had a displacement of less than 3,000 tons. During the two following decades, this cruiser type came to be the inspiration for combining heavy artillery, high speed and low displacement.",
"title": "Steel cruisers"
},
{
"paragraph_id": 19,
"text": "The torpedo cruiser (known in the Royal Navy as the torpedo gunboat) was a smaller unarmored cruiser, which emerged in the 1880s–1890s. These ships could reach speeds up to 20 knots (37 km/h) and were armed with medium to small calibre guns as well as torpedoes. These ships were tasked with guard and reconnaissance duties, to repeat signals and all other fleet duties for which smaller vessels were suited. These ships could also function as flagships of torpedo boat flotillas. After the 1900s, these ships were usually traded for faster ships with better sea going qualities.",
"title": "Steel cruisers"
},
{
"paragraph_id": 20,
"text": "Steel also affected the construction and role of armored cruisers. Steel meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s and early 1900s greatly resembled the battleships of the day; they tended to carry slightly smaller main armament (7.5-to-10-inch (190 to 250 mm) rather than 12-inch) and have somewhat thinner armor in exchange for a faster speed (perhaps 21 to 23 knots (39 to 43 km/h) rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred.",
"title": "Steel cruisers"
},
{
"paragraph_id": 21,
"text": "Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them \"a miser's hoard of useless junk\" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type.",
"title": "Early 20th century"
},
{
"paragraph_id": 22,
"text": "The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship; the brainchild of British admiral Jackie Fisher. He believed that to ensure British naval dominance in its overseas colonial possessions, a fleet of large, fast, powerfully armed vessels which would be able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority was needed. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the battlecruiser, and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to \"choose their range\" (to the enemy) with superior speed and only engage the enemy at long range. When engaged at moderate ranges, the lack of protection combined with unsafe ammunition handling practices became tragic with the loss of three of them at the Battle of Jutland. Germany and eventually Japan followed suit to build these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much larger propulsion plants.",
"title": "Early 20th century"
},
{
"paragraph_id": 23,
"text": "At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. By the British Town class, the first of which was launched in 1909, it was possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet.",
"title": "Early 20th century"
},
{
"paragraph_id": 24,
"text": "Some light cruisers were built specifically to act as the leaders of flotillas of destroyers.",
"title": "Early 20th century"
},
{
"paragraph_id": 25,
"text": "These vessels were essentially large coastal patrol boats armed with multiple light guns. One such warship was Grivița of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns.",
"title": "Early 20th century"
},
{
"paragraph_id": 26,
"text": "The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in their long-range lines or provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships.",
"title": "Early 20th century"
},
{
"paragraph_id": 27,
"text": "Some large liners were armed in the same way. In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and in the early part of the Second, they were used as convoy escorts by the British.",
"title": "Early 20th century"
},
{
"paragraph_id": 28,
"text": "Cruisers were one of the workhorse types of warship during World War I. By the time of World War I, cruisers had accelerated their development and improved their quality significantly, with drainage volume reaching 3000–4000 tons, a speed of 25–30 knots and a calibre of 127–152 mm.",
"title": "Early 20th century"
},
{
"paragraph_id": 29,
"text": "Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as \"treaty cruisers\".",
"title": "Mid-20th century"
},
{
"paragraph_id": 30,
"text": "The London Naval Treaty in 1930 then formalised the distinction between these \"heavy\" cruisers and light cruisers: a \"heavy\" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period of the major powers building 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the Mogami class, launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun Southampton class in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing \"super-heavy\" armor piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun Brooklyn-class cruisers in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with USS Wichita in 1937.",
"title": "Mid-20th century"
},
{
"paragraph_id": 31,
"text": "The heavy cruiser was a type of cruiser designed for long range, high speed and an armament of naval guns around 203 mm (8 in) in calibre. The first heavy cruisers were built in 1915, although it only became a widespread classification following the London Naval Treaty in 1930. The heavy cruiser's immediate precursors were the light cruiser designs of the 1910s and 1920s; the US lightly armored 8-inch \"treaty cruisers\" of the 1920s (built under the Washington Naval Treaty) were originally classed as light cruisers until the London Treaty forced their redesignation.",
"title": "Mid-20th century"
},
{
"paragraph_id": 32,
"text": "Initially, all cruisers built under the Washington treaty had torpedo tubes, regardless of nationality. However, in 1930, results of war games caused the US Naval War College to conclude that only perhaps half of cruisers would use their torpedoes in action. In a surface engagement, long-range gunfire and destroyer torpedoes would decide the issue, and under air attack numerous cruisers would be lost before getting within torpedo range. Thus, beginning with USS New Orleans launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the Furutaka class launched in 1925, every Japanese heavy cruiser was armed with 24-inch (610 mm) torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed \"Long Lance\" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of 22,000 metres (24,000 yd) at 50 knots (93 km/h; 58 mph), compared with the US Mark 15 torpedo with 5,500 metres (6,000 yd) at 45 knots (83 km/h; 52 mph). The Mark 15 had a maximum range of 13,500 metres (14,800 yd) at 26.5 knots (49.1 km/h; 30.5 mph), still well below the \"Long Lance\". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943, thus the Allies faced a great threat they were not aware of in 1942. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers.",
"title": "Mid-20th century"
},
{
"paragraph_id": 33,
"text": "Heavy cruisers continued in use until after World War II, with some converted to guided-missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War.",
"title": "Mid-20th century"
},
{
"paragraph_id": 34,
"text": "The German Deutschland class was a series of three Panzerschiffe (\"armored ships\"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. The similar Swedish Panzerschiffe were tactically used as centers of battlefleets and not as cruisers. They were deployed by Nazi Germany in support of the German interests in the Spanish Civil War. Panzerschiff Admiral Graf Spee represented Germany in the 1937 Coronation Fleet Review.",
"title": "Mid-20th century"
},
{
"paragraph_id": 35,
"text": "The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty compliant 10,000 ton cruisers. And although their displacement and scale of armor protection were that of a heavy cruiser, their 280 mm (11 in) main armament was heavier than the 203 mm (8 in) guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships. The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS Hood, HMS Repulse and HMS Renown were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, Deutschland-class ships continued to be called pocket battleships in the popular press.",
"title": "Mid-20th century"
},
{
"paragraph_id": 36,
"text": "The American Alaska class represented the supersized cruiser design. Due to the German pocket battleships, the Scharnhorst class, and rumored Japanese \"super cruisers\", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the Alaskas were intended to be \"cruiser-killers\". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled a scaled-up heavy cruiser design. Their hull classification symbol of CB (cruiser, big) reflected this.",
"title": "Mid-20th century"
},
{
"paragraph_id": 37,
"text": "A precursor to the anti-aircraft cruiser was the Romanian British-built protected cruiser Elisabeta. After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire.",
"title": "Mid-20th century"
},
{
"paragraph_id": 38,
"text": "The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed HMS Coventry and HMS Curlew. Torpedo tubes and 6-inch (152 mm) low-angle guns were removed from these World War I light cruisers and replaced with ten 4-inch (102 mm) high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers.",
"title": "Mid-20th century"
},
{
"paragraph_id": 39,
"text": "A tactical shortcoming was recognised after completing six additional conversions of C-class cruisers. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers.",
"title": "Mid-20th century"
},
{
"paragraph_id": 40,
"text": "The first purpose built anti-aircraft cruiser was the British Dido class, completed in 1940–42. The US Navy's Atlanta-class cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both Dido and Atlanta cruisers initially carried torpedo tubes; the Atlanta cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949.",
"title": "Mid-20th century"
},
{
"paragraph_id": 41,
"text": "The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: USS Worcester, completed in 1948; USS Roanoke, completed in 1949; two Tre Kronor-class cruisers, completed in 1947; two De Zeven Provinciën-class cruisers, completed in 1953; De Grasse, completed in 1955; Colbert, completed in 1959; and HMS Tiger, HMS Lion and HMS Blake, all completed between 1959 and 1961.",
"title": "Mid-20th century"
},
{
"paragraph_id": 42,
"text": "Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided-missile cruiser (CAG/CLG/CG/CGN).",
"title": "Mid-20th century"
},
{
"paragraph_id": 43,
"text": "Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf. In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the Mogami and Tone classes as heavy cruisers by replacing their 6.1 in (155 mm) triple turrets with 8 in (203 mm) twin turrets. Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen 24 in (610 mm) tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers Ōi and Kitakami were converted to torpedo cruisers with four 5.5 in (140 mm) guns and forty 24 in (610 mm) torpedo tubes. In 1944 Kitakami was further converted to carry up to eight Kaiten human torpedoes in place of ordinary torpedoes. Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. Heavy cruiser tonnage reached 20–30,000 tons, speed 32–34 knots, endurance of more than 10,000 nautical miles, armor thickness of 127–203 mm. Heavy cruisers were equipped with eight or nine 8 in (203 mm) guns with a range of more than 20 nautical miles. They were mainly used to attack enemy surface ships and shore-based targets. In addition, there were 10–16 secondary guns with a caliber of less than 130 mm (5.1 in). Also, dozens of automatic antiaircraft guns were installed to fight aircraft and small vessels such as torpedo boats. For example, in World War II, American Alaska-class cruisers were more than 30,000 tons, equipped with nine 12 in (305 mm) guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces, which dominated the world's oceans for more than a century. After the signing of the Washington Treaty on Arms Limitation in 1922, the tonnage and quantity of battleships, aircraft carriers and cruisers were severely restricted. In order not to violate the treaty, countries began to develop light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches). In addition, they were equipped with 8–12 secondary guns under 127 mm (5 in) and dozens of small caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen 155 mm (6.1 in) guns. The Japanese Mogami class were built to this treaty's limit, the Americans and British also built similar ships. However, in 1939 the Mogamis were refitted as heavy cruisers with ten 203 mm (8.0 in) guns.",
"title": "World War II"
},
{
"paragraph_id": 44,
"text": "In December 1939, three British cruisers engaged the German \"pocket battleship\" Admiral Graf Spee (which was on a commerce raiding mission) in the Battle of the River Plate; German cruiser Admiral Graf Spee then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating capital ships were in the area, the British caused Admiral Graf Spee's captain to think he faced a hopeless situation while low on ammunition and order his ship scuttled. On 8 June 1940 the German capital ships Scharnhorst and Gneisenau, classed as battleships but with large cruiser armament, sank the aircraft carrier HMS Glorious with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as \"pocket battleship\", see above) Admiral Scheer conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans.",
"title": "World War II"
},
{
"paragraph_id": 45,
"text": "On 27 May 1941, HMS Dorsetshire attempted to finish off the German battleship Bismarck with torpedoes, probably causing the Germans to scuttle the ship. Bismarck (accompanied by the heavy cruiser Prinz Eugen) previously sank the battlecruiser HMS Hood and damaged the battleship HMS Prince of Wales with gunfire in the Battle of the Denmark Strait.",
"title": "World War II"
},
{
"paragraph_id": 46,
"text": "On 19 November 1941 HMAS Sydney sank in a mutually fatal engagement with the German raider Kormoran in the Indian Ocean near Western Australia.",
"title": "World War II"
},
{
"paragraph_id": 47,
"text": "Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak.",
"title": "World War II"
},
{
"paragraph_id": 48,
"text": "In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser Admiral Scheer, failed due to multiple German warships grounding, but air and submarine attacks sank 2/3 of the convoy's ships. In August 1942 Admiral Scheer conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success.",
"title": "World War II"
},
{
"paragraph_id": 49,
"text": "On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers (HMS Sheffield and HMS Jamaica) and two destroyers were in the area. Two heavy cruisers (one the \"pocket battleship\" Lützow), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes.",
"title": "World War II"
},
{
"paragraph_id": 50,
"text": "On 26 December 1943 the German capital ship Scharnhorst was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship HMS Duke of York, accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved HMS Belfast.",
"title": "World War II"
},
{
"paragraph_id": 51,
"text": "Scharnhorst's sister Gneisenau, damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six 38 cm (15 in) guns instead of nine 28 cm (11 in) guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway.",
"title": "World War II"
},
{
"paragraph_id": 52,
"text": "The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS Prince of Wales and the battlecruiser HMS Repulse were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944.",
"title": "World War II"
},
{
"paragraph_id": 53,
"text": "Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. USS North Carolina was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs.",
"title": "World War II"
},
{
"paragraph_id": 54,
"text": "Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four Kongō-class ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that Yamato sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the Kongō class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked.",
"title": "World War II"
},
{
"paragraph_id": 55,
"text": "From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers.",
"title": "World War II"
},
{
"paragraph_id": 56,
"text": "Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of—the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) \"treaty cruisers\"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic.",
"title": "World War II"
},
{
"paragraph_id": 57,
"text": "Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was the damaged USS Marblehead. Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced.",
"title": "World War II"
},
{
"paragraph_id": 58,
"text": "After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces.",
"title": "World War II"
},
{
"paragraph_id": 59,
"text": "Battle of Savo Island On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands. Along with their superior torpedoes, the opening Japanese gunfire was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway.",
"title": "World War II"
},
{
"paragraph_id": 60,
"text": "Battle of the Eastern Solomons On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the \"Tokyo Express\" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night.",
"title": "World War II"
},
{
"paragraph_id": 61,
"text": "Battle of Cape Esperance The Battle of Cape Esperance occurred on the night of 11–12 October 1942. A Tokyo Express mission was underway for Guadalcanal at the same time as a separate cruiser-destroyer bombardment group loaded with high explosive shells for bombarding Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action. The bombardment force was sighted at close range (5,000 yards (4,600 m)) and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of Boise alone. The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th.",
"title": "World War II"
},
{
"paragraph_id": 62,
"text": "Battle of the Santa Cruz Islands The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943.",
"title": "World War II"
},
{
"paragraph_id": 63,
"text": "Naval Battle of Guadalcanal The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion. The destroyer USS Laffey closed with the battleship Hiei, firing all torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including Laffey, with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, USS Enterprise, and Espiritu Santo found the damaged battleship and two destroyers in the area. The battleship (Hiei) was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged USS Portland, and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November, one (Juneau) torpedoed by a Japanese submarine, and the other sank on the way to repairs. Juneau's loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers.",
"title": "World War II"
},
{
"paragraph_id": 64,
"text": "The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship Kirishima) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as Kirishima had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage. The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had fighter cover from the carrier Jun'yō, six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action.",
"title": "World War II"
},
{
"paragraph_id": 65,
"text": "On the night of 14–15 November a Japanese force of Kirishima, two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships (Washington and South Dakota) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting Enterprise, but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on Kirishima, the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, South Dakota spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, \"deaf, dumb, blind, and impotent\". Washington went undetected by the Japanese for most of the battle, but withheld shooting to avoid \"friendly fire\" until South Dakota was illuminated by Japanese fire, then rapidly set Kirishima ablaze with a jammed rudder and other damage. Washington, finally spotted by the Japanese, then headed for the Russell Islands to hopefully draw the Japanese away from Guadalcanal and South Dakota, and was successful in evading several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. Kirishima sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed.",
"title": "World War II"
},
{
"paragraph_id": 66,
"text": "Battle of Tassafaronga The Battle of Tassafaronga took place on the night of 30 November – 1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire which later sank, but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off of two of them. It was significant that these two were not lost to Long Lance hits as happened in previous battles; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal. Another attempt on 3 December dropped 1,500 drums of supplies near Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This was on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of 1,200 drums dropped off the island were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943.",
"title": "World War II"
},
{
"paragraph_id": 67,
"text": "After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and isolating Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. Despite their tactical victory, this battle caused the Japanese to use a different route in the future, where they were more vulnerable to destroyer and PT boat attacks.",
"title": "World War II"
},
{
"paragraph_id": 68,
"text": "The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle; it is likely that Allied airborne radar had progressed far enough to allow night operations. The Americans had four of the new Cleveland-class cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were plagued by collisions, shells that failed to explode, and mutual skill in dodging torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover.",
"title": "World War II"
},
{
"paragraph_id": 69,
"text": "After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations.",
"title": "World War II"
},
{
"paragraph_id": 70,
"text": "From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the \"Marianas Turkey Shoot\" by the US Navy.",
"title": "World War II"
},
{
"paragraph_id": 71,
"text": "The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to dislodge the American invasion of the Philippines in October 1944. The two actions at this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait.",
"title": "World War II"
},
{
"paragraph_id": 72,
"text": "Battle of Surigao Strait The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of Fusō and Yamashiro, one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force, known as the 7th Fleet Support Force, guarding the strait was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The only advantage to the Japanese was that most of the Allied battleships and cruisers were loaded mainly with high explosive shells, although a significant number of armor-piercing shells were also loaded. The lead Japanese force evaded the PT boats' torpedoes, but were hit hard by the destroyers' torpedoes, losing a battleship. Then they encountered the battleship and cruiser guns. Only one destroyer survived. The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force was in a \"crossing the T\" position, so this was the last battle in which this occurred, but it was not a planned maneuver. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American veteran of Surigao Strait, USS Phoenix, was transferred to Argentina in 1951 as General Belgrano, becoming most famous for being sunk by HMS Conqueror in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II.",
"title": "World War II"
},
{
"paragraph_id": 73,
"text": "Battle off Samar At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as \"Taffy 3\" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were 5 in (127 mm)/38 caliber guns, while the Japanese had 14 in (356 mm), 16 in (406 mm), and 18.1 in (460 mm) guns. Aircraft from six additional escort carriers also participated for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including Yamato, six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing Yamato's sister Musashi. Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which kamikaze attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and maneuvering to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to think they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours before at Surigao Strait, and did not hear that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it hadn't already, the Japanese withdrew, eventually losing three heavy cruisers sunk with three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged, thus losing over one-third of their engaged force sunk with nearly all the remainder damaged.",
"title": "World War II"
},
{
"paragraph_id": 74,
"text": "The US built cruisers in quantity through the end of the war, notably 14 Baltimore-class heavy cruisers and 27 Cleveland-class light cruisers, along with eight Atlanta-class anti-aircraft cruisers. The Cleveland class was the largest cruiser class ever built in number of ships completed, with nine additional Clevelands completed as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous Essex-class aircraft carriers being built. Losing four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six 6.1 in (155 mm) guns each. Losing 20 cruisers in 1940–42, the British completed no heavy cruisers, thirteen light cruisers (Fiji and Minotaur classes), and sixteen anti-aircraft cruisers (Dido class) during the war.",
"title": "World War II"
},
{
"paragraph_id": 75,
"text": "The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships. Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack; but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines.",
"title": "Late 20th century"
},
{
"paragraph_id": 76,
"text": "The US Navy was aware of the potential missile threat as soon as World War II ended, and had considerable related experience due to Japanese kamikaze attacks in that war. The initial response was to upgrade the light AA armament of new cruisers from 40 mm and 20 mm weapons to twin 3-inch (76 mm)/50 caliber gun mounts. For the longer term, it was thought that gun systems would be inadequate to deal with the missile threat, and by the mid-1950s three naval SAM systems were developed: Talos (long range), Terrier (medium range), and Tartar (short range). Talos and Terrier were nuclear-capable and this allowed their use in anti-ship or shore bombardment roles in the event of nuclear war. Chief of Naval Operations Admiral Arleigh Burke is credited with speeding the development of these systems.",
"title": "Late 20th century"
},
{
"paragraph_id": 77,
"text": "Terrier was initially deployed on two converted Baltimore-class cruisers (CAG), with conversions completed in 1955–56. Further conversions of six Cleveland-class cruisers (CLG) (Galveston and Providence classes), redesign of the Farragut class as guided-missile \"frigates\" (DLG), and development of the Charles F. Adams-class DDGs resulted in the completion of numerous additional guided-missile ships deploying all three systems in 1959–1962. Also completed during this period was the nuclear-powered USS Long Beach, with two Terrier and one Talos launchers, plus an ASROC anti-submarine launcher the World War II conversions lacked. The converted World War II cruisers up to this point retained one or two main battery turrets for shore bombardment. However, in 1962–1964 three additional Baltimore and Oregon City-class cruisers were more extensively converted as the Albany class. These had two Talos and two Tartar launchers plus ASROC and two 5-inch (127 mm) guns for self-defense, and were primarily built to get greater numbers of Talos launchers deployed. Of all these types, only the Farragut DLGs were selected as the design basis for further production, although their Leahy-class successors were significantly larger (5,670 tons standard versus 4,150 tons standard) due to a second Terrier launcher and greater endurance. An economical crew size compared with World War II conversions was probably a factor, as the Leahys required a crew of only 377 versus 1,200 for the Cleveland-class conversions. Through 1980, the ten Farraguts were joined by four additional classes and two one-off ships for a total of 36 guided-missile frigates, eight of them nuclear-powered (DLGN). In 1975 the Farraguts were reclassified as guided-missile destroyers (DDG) due to their small size, and the remaining DLG/DLGN ships became guided-missile cruisers (CG/CGN). The World War II conversions were gradually retired between 1970 and 1980; the Talos missile was withdrawn in 1980 as a cost-saving measure and the Albanys were decommissioned. Long Beach had her Talos launcher removed in a refit shortly thereafter; the deck space was used for Harpoon missiles. Around this time the Terrier ships were upgraded with the RIM-67 Standard ER missile. The guided-missile frigates and cruisers served in the Cold War and the Vietnam War; off Vietnam they performed shore bombardment and shot down enemy aircraft or, as Positive Identification Radar Advisory Zone (PIRAZ) ships, guided fighters to intercept enemy aircraft. By 1995 the former guided-missile frigates were replaced by the Ticonderoga-class cruisers and Arleigh Burke-class destroyers.",
"title": "Late 20th century"
},
{
"paragraph_id": 78,
"text": "The U.S. Navy's guided-missile cruisers were built upon destroyer-style hulls (some called \"destroyer leaders\" or \"frigates\" prior to the 1975 reclassification). As the U.S. Navy's strike role was centered around aircraft carriers, cruisers were primarily designed to provide air defense while often adding anti-submarine capabilities. These U.S. cruisers that were built in the 1960s and 1970s were larger, often nuclear-powered for extended endurance in escorting nuclear-powered fleet carriers, and carried longer-range surface-to-air missiles (SAMs) than early Charles F. Adams guided-missile destroyers that were tasked with the short-range air defense role. The U.S. cruiser was a major contrast to their contemporaries, Soviet \"rocket cruisers\" that were armed with large numbers of anti-ship cruise missiles (ASCMs) as part of the combat doctrine of saturation attack, though in the early 1980s the U.S. Navy retrofitted some of these existing cruisers to carry a small number of Harpoon anti-ship missiles and Tomahawk cruise missiles.",
"title": "Late 20th century"
},
{
"paragraph_id": 79,
"text": "The line between U.S. Navy cruisers and destroyers blurred with the Spruance class. While originally designed for anti-submarine warfare, a Spruance destroyer was comparable in size to existing U.S. cruisers, while having the advantage of an enclosed hangar (with space for up to two medium-lift helicopters) which was a considerable improvement over the basic aviation facilities of earlier cruisers. The Spruance hull design was used as the basis for two classes; the Kidd class which had comparable anti-air capabilities to cruisers at the time, and then the DDG-47-class destroyers which were redesignated as the Ticonderoga-class guided-missile cruisers to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff. In addition, 24 members of the Spruance class were upgraded with the vertical launch system (VLS) for Tomahawk cruise missiles due to its modular hull design, along with the similarly VLS-equipped Ticonderoga class, these ships had anti-surface strike capabilities beyond the 1960s–1970s cruisers that received Tomahawk armored-box launchers as part of the New Threat Upgrade. Like the Ticonderoga ships with VLS, the Arleigh Burke and Zumwalt class, despite being classified as destroyers, actually have much heavier anti-surface armament than previous U.S. ships classified as cruisers.",
"title": "Late 20th century"
},
{
"paragraph_id": 80,
"text": "Prior to the introduction of the Ticonderogas, the US Navy used odd naming conventions that left its fleet seemingly without many cruisers, although a number of their ships were cruisers in all but name. From the 1950s to the 1970s, US Navy cruisers were large vessels equipped with heavy, specialized missiles (mostly surface-to-air, but for several years including the Regulus nuclear cruise missile) for wide-ranging combat against land-based and sea-based targets. All save one—USS Long Beach—were converted from World War II cruisers of the Oregon City, Baltimore and Cleveland classes. Long Beach was also the last cruiser built with a World War II-era cruiser style hull (characterized by a long lean hull); later new-build cruisers were actually converted frigates (DLG/CG USS Bainbridge, USS Truxtun, and the Leahy, Belknap, California, and Virginia classes) or uprated destroyers (the DDG/CG Ticonderoga class was built on a Spruance-class destroyer hull).",
"title": "Late 20th century"
},
{
"paragraph_id": 81,
"text": "Frigates under this scheme were almost as large as the cruisers and optimized for anti-aircraft warfare, although they were capable anti-surface warfare combatants as well. In the late 1960s, the US government perceived a \"cruiser gap\"—at the time, the US Navy possessed six ships designated as cruisers, compared to 19 for the Soviet Union, even though the USN had 21 ships designated as frigates with equal or superior capabilities to the Soviet cruisers at the time. Because of this, in 1975 the Navy performed a massive redesignation of its forces:",
"title": "Late 20th century"
},
{
"paragraph_id": 82,
"text": "Also, a series of Patrol Frigates of the Oliver Hazard Perry class, originally designated PFG, were redesignated into the FFG line. The cruiser-destroyer-frigate realignment and the deletion of the Ocean Escort type brought the US Navy's ship designations into line with the rest of the world's, eliminating confusion with foreign navies. In 1980, the Navy's then-building DDG-47-class destroyers were redesignated as cruisers (Ticonderoga guided-missile cruisers) to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff.",
"title": "Late 20th century"
},
{
"paragraph_id": 83,
"text": "In the Soviet Navy, cruisers formed the basis of combat groups. In the immediate post-war era it built a fleet of gun-armed light cruisers, but replaced these beginning in the early 1960s with large ships called \"rocket cruisers\", carrying large numbers of anti-ship cruise missiles (ASCMs) and anti-aircraft missiles. The Soviet combat doctrine of saturation attack meant that their cruisers (as well as destroyers and even missile boats) mounted multiple missiles in large container/launch tube housings and carried far more ASCMs than their NATO counterparts, while NATO combatants instead used individually smaller and lighter missiles (while appearing under-armed when compared to Soviet ships).",
"title": "Late 20th century"
},
{
"paragraph_id": 84,
"text": "In 1962–1965 the four Kynda-class cruisers entered service; these had launchers for eight long-range SS-N-3 Shaddock ASCMs with a full set of reloads; these had a range of up to 450 kilometres (280 mi) with mid-course guidance. The four more modest Kresta I-class cruisers, with launchers for four SS-N-3 ASCMs and no reloads, entered service in 1967–69. In 1969–79 Soviet cruiser numbers more than tripled with ten Kresta II-class cruisers and seven Kara-class cruisers entering service. These had launchers for eight large-diameter missiles whose purpose was initially unclear to NATO. This was the SS-N-14 Silex, an over/under rocket-delivered heavyweight torpedo primarily for the anti-submarine role, but capable of anti-surface action with a range of up to 90 kilometres (56 mi). Soviet doctrine had shifted; powerful anti-submarine vessels (these were designated \"Large Anti-Submarine Ships\", but were listed as cruisers in most references) were needed to destroy NATO submarines to allow Soviet ballistic missile submarines to get within range of the United States in the event of nuclear war. By this time Long Range Aviation and the Soviet submarine force could deploy numerous ASCMs. Doctrine later shifted back to overwhelming carrier group defenses with ASCMs, with the Slava and Kirov classes.",
"title": "Late 20th century"
},
{
"paragraph_id": 85,
"text": "The most recent Soviet/Russian rocket cruisers, the four Kirov-class battlecruisers, were built in the 1970s and 1980s. One of the Kirov class is in refit, and 2 are being scrapped, with the Pyotr Velikiy in active service. Russia also operates two Slava-class cruisers and one Admiral Kuznetsov-class carrier which is officially designated as a cruiser, specifically a \"heavy aviation cruiser\" (Russian: тяжелый авианесущий крейсер) due to her complement of 12 P-700 Granit supersonic AShMs.",
"title": "Late 20th century"
},
{
"paragraph_id": 86,
"text": "Currently, the Kirov-class heavy missile cruisers are used for command purposes, as Pyotr Velikiy is the flagship of the Northern Fleet. However, their air defense capabilities are still powerful, as shown by the array of point defense missiles they carry, from 44 OSA-MA missiles to 196 9K311 Tor missiles. For longer range targets, the S-300 is used. For closer range targets, AK-630 or Kashtan CIWSs are used. Aside from that, Kirovs have 20 P-700 Granit missiles for anti-ship warfare. For target acquisition beyond the radar horizon, three helicopters can be used. Besides a vast array of armament, Kirov-class cruisers are also outfitted with many sensors and communications equipment, allowing them to lead the fleet.",
"title": "Late 20th century"
},
{
"paragraph_id": 87,
"text": "The United States Navy has centered on the aircraft carrier since World War II. The Ticonderoga-class cruisers, built in the 1980s, were originally designed and designated as a class of destroyer, intended to provide a very powerful air-defense in these carrier-centered fleets. Outside the US and Soviet navies, new cruisers were rare following World War II. Most navies use guided-missile destroyers for fleet air defense, and destroyers and frigates for cruise missiles. The need to operate in task forces has led most navies to change to fleets designed around ships dedicated to a single role, anti-submarine or anti-aircraft typically, and the large \"generalist\" ship has disappeared from most forces. The United States Navy, the Russian Navy and the Italian Navy are the only remaining navies which operate active duty cruisers. Italy used Vittorio Veneto until 2003 (decommissioned in 2006) and continues to use Giuseppe Garibaldi as of 2023; France operated a single helicopter cruiser until May 2010, Jeanne d'Arc, for training purposes only. While Type 055 of the Chinese Navy is classified as a cruiser by the U.S. Department of Defense, the Chinese consider it a guided-missile destroyer.",
"title": "Late 20th century"
},
{
"paragraph_id": 88,
"text": "In the years since the launch of Ticonderoga in 1981, the class has received a number of upgrades that have dramatically improved its members' capabilities for anti-submarine and land attack (using the Tomahawk missile). Like their Soviet counterparts, the modern Ticonderogas can also be used as the basis for an entire battle group. Their cruiser designation was almost certainly deserved when first built, as their sensors and combat management systems enable them to act as flagships for a surface warship flotilla if no carrier is present, but newer ships rated as destroyers and also equipped with Aegis approach them very closely in capability, and once more blur the line between the two classes.",
"title": "Late 20th century"
},
{
"paragraph_id": 89,
"text": "If the Ukrainian account of the sinking of the Russian cruiser Moskva is proven correct then it raises questions about the vulnerability of surface ships against cruise missiles. The ship was only hit by two brand new, and virtually untested, R-360 Neptune missiles.",
"title": "Late 20th century"
},
{
"paragraph_id": 90,
"text": "From time to time, some navies have experimented with aircraft-carrying cruisers. One example is the Swedish Gotland. Another was the Japanese Mogami, which was converted to carry a large floatplane group in 1942. Another variant is the helicopter cruiser. The last example in service was the Soviet Navy's Kiev class, whose last unit Admiral Gorshkov was converted to a pure aircraft carrier and sold to India as INS Vikramaditya. The Russian Navy's Admiral Kuznetsov is nominally designated as an aviation cruiser but otherwise resembles a standard medium aircraft carrier, albeit with a surface-to-surface missile battery. The Royal Navy's aircraft-carrying Invincible class and the Italian Navy's aircraft-carrying Giuseppe Garibaldi vessels were originally designated 'through-deck cruisers', but have since been designated as small aircraft carriers (although the 'C' in the pennant for Giuseppe Garibaldi indicates it retains some status as an aircraft-carrying cruiser). Similarly, the Japan Maritime Self-Defense Force's Hyūga-class \"helicopter destroyers\" are really more along the lines of helicopter cruisers in function and aircraft complement, but due to the Treaty of San Francisco, must be designated as destroyers.",
"title": "Late 20th century"
},
{
"paragraph_id": 91,
"text": "One cruiser alternative studied in the late 1980s by the United States was variously entitled a Mission Essential Unit (MEU) or CG V/STOL. In a return to the thoughts of the independent operations cruiser-carriers of the 1930s and the Soviet Kiev class, the ship was to be fitted with a hangar, elevators, and a flight deck. The mission systems were Aegis, SQS-53 sonar, 12 SV-22 ASW aircraft and 200 VLS cells. The resulting ship would have had a waterline length of 700 feet, a waterline beam of 97 feet, and a displacement of about 25,000 tons. Other features included an integrated electric drive and advanced computer systems, both stand-alone and networked. It was part of the U.S. Navy's \"Revolution at Sea\" effort. The project was curtailed by the sudden end of the Cold War and its aftermath, otherwise the first of class would have been likely ordered in the early 1990s.",
"title": "Late 20th century"
},
{
"paragraph_id": 92,
"text": "Few cruisers are still operational in the world's navies. Those that remain in service today are:",
"title": "Operators"
},
{
"paragraph_id": 93,
"text": "The following is laid up:",
"title": "Operators"
},
{
"paragraph_id": 94,
"text": "The following are classified as destroyers by their respective operators, but, due to their size and capabilities, are considered to be cruisers by some, all having full load displacements of at least 10,000 tons:",
"title": "Operators"
},
{
"paragraph_id": 95,
"text": "As of 2019, several decommissioned cruisers have been saved from scrapping and exist worldwide as museum ships. They are:",
"title": "Museum ships"
}
] | A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several roles. The term "cruiser", which has been in use for several hundred years, has changed its meaning over time. During the Age of Sail, the term cruising referred to certain kinds of missions—independent scouting, commerce protection, or raiding—usually fulfilled by frigates or sloops-of-war, which functioned as the cruising warships of a fleet. In the middle of the 19th century, cruiser came to be a classification of the ships intended for cruising distant waters, for commerce raiding, and for scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships. By the early 20th century, after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre, while the 1930 London Naval Treaty divided these cruisers into two types: heavy cruisers, with guns larger than 6.1 inches and up to 8 inches, and light cruisers, with guns of 6.1 inches or less. Each type was limited in total and individual tonnage, which shaped cruiser design until the collapse of the treaty system just prior to the start of World War II. Some variations on the Treaty cruiser design included the German Deutschland-class "pocket battleships", which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American Alaska class, which was a scaled-up heavy cruiser design designated as a "cruiser-killer". In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant. The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task-forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early Charles F. Adams guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War the line between cruisers and destroyers had blurred, with the Ticonderoga-class cruiser using the hull of the Spruance-class destroyer but receiving the cruiser designation due to its enhanced mission and combat systems. As of 2023, only three countries operate active duty vessels formally classed as cruisers: the United States, Russia and Italy. These cruisers are primarily armed with guided missiles, with the exceptions of the aircraft cruisers Admiral Kuznetsov and Giuseppe Garibaldi. 
BAP Almirante Grau was the last gun cruiser in service, serving with the Peruvian Navy until 2017. Nevertheless, other classes in addition to the above may be considered cruisers due to differing classification systems. The US/NATO system includes the Type 055 from China and the Kirov and Slava from Russia. International Institute for Strategic Studies' "The Military Balance" defines a cruiser as a surface combatant displacing at least 9750 tonnes; with respect to vessels in service as of the early 2020s it includes the Type 055, the Sejong the Great from South Korea, the Atago and Maya from Japan and the Ticonderoga and Zumwalt from the US. | 2001-11-07T03:23:31Z | 2023-12-30T07:36:32Z | [
"Template:Sclass",
"Template:As of",
"Template:See also",
"Template:Cite encyclopedia",
"Template:Cite news",
"Template:Pp-move",
"Template:Main",
"Template:Convert",
"Template:Citation needed",
"Template:Naval",
"Template:Reflist",
"Template:Cite web",
"Template:Ship types",
"Template:Short description",
"Template:Pp-pc",
"Template:HMAS",
"Template:Lang-ru",
"Template:Hatgrp",
"Template:HMS",
"Template:Update",
"Template:Cite book",
"Template:Authority control",
"Template:Ship",
"Template:'",
"Template:Navy",
"Template:Flagicon",
"Template:Cite journal",
"Template:Warship types of the 19th & 20th centuries",
"Template:USS",
"Template:Cn",
"Template:Cite report",
"Template:Sclass2",
"Template:ISBN"
] | https://en.wikipedia.org/wiki/Cruiser |
7,037 | Chlamydia | Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. Most people who are infected have no symptoms. When symptoms do appear they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy.
Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world.
Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening which is recommended yearly in sexually active women under the age of twenty-five, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas.
Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment people should be tested again after three months.
Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015 infections resulted in about 200 deaths. The word chlamydia is from the Greek χλαμύδα, meaning 'cloak'.
Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, oral, or manual sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the "silent epidemic", as at least 70% of genital C. trachomatis infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend all women attending for emergency contraceptive are offered chlamydia testing, with studies showing up to 9% of women aged <25 years had chlamydia.
In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility if not treated. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis.
Trachoma is a chronic conjunctivitis caused by Chlamydia trachomatis. It was once the leading cause of blindness worldwide, but its role diminished from 15% of blindness cases by trachoma in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aimed (unsuccessfully) for the global elimination of trachoma by 2020 (GET 2020 initiative). The updated World Health Assembly neglected tropical diseases road map (2021–2030) sets 2030 as the new timeline for global elimination.
Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though is more common in men.
As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)).
A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body.
Chlamydia can be transmitted during vaginal, anal, oral, or manual sex or direct contact with infected tissue such as conjunctiva. Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to.
Chlamydiae have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for Chlamydiae since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear within a year, 80% within two years, and 90% within three years.
The starved chlamydiae enter a persistent growth state wherein they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve.
There is debate as to whether persistence has relevance. Some believe that persistent chlamydiae are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state.
The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and DNA strand displacement amplification (SDA), are now the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, whether sampling is by cervical swab or urine specimen. In women attending a sexually transmitted infection (STI) clinic whose urine test is negative, a subsequent cervical swab has been estimated to be positive approximately 2% of the time.
At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens.
Because of improved test accuracy, ease and convenience of specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter tests are relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving false-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. Other methods also exist, including ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture.
Swab samples for chlamydial infection show no difference in the number of patients treated whether the sample is collected at home or in a clinic. The implications for cure, reinfection, partner management, and safety are unknown.
Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and nonpregnant women because of high false-negative rates.
Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected.
For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening women with age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women.
In the United Kingdom the National Health Service (NHS) aims to:
C. trachomatis infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose) but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin.
An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner.
Following treatment people should be tested again after three months to check for reinfection.
Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths.
In the United States about 1.6 million cases were reported in 2016. The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK.
Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed. | [
{
"paragraph_id": 0,
"text": "Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. Most people who are infected have no symptoms. When symptoms do appear they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening which is recommended yearly in sexually active women under the age of twenty-five, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment people should be tested again after three months.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015 infections resulted in about 200 deaths. The word chlamydia is from the Greek χλαμύδα, meaning 'cloak'.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for around 70% of women infected. The infection can be passed through vaginal, anal, oral, or manual sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy. Chlamydia is known as the \"silent epidemic\", as at least 70% of genital C. trachomatis infections in women (and 50% in men) are asymptomatic at the time of diagnosis, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency). For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend all women attending for emergency contraceptive are offered chlamydia testing, with studies showing up to 9% of women aged <25 years had chlamydia.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 6,
"text": "In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility if not treated. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 7,
"text": "Trachoma is a chronic conjunctivitis caused by Chlamydia trachomatis. It was once the leading cause of blindness worldwide, but its role diminished from 15% of blindness cases by trachoma in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aimed (unsuccessfully) for the global elimination of trachoma by 2020 (GET 2020 initiative). The updated World Health Assembly neglected tropical diseases road map (2021–2030) sets 2030 as the new timeline for global elimination.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 8,
"text": "Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though is more common in men.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 9,
"text": "As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)).",
"title": "Signs and symptoms"
},
{
"paragraph_id": 10,
"text": "A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 11,
"text": "Chlamydia can be transmitted during vaginal, anal, oral, or manual sex or direct contact with infected tissue such as conjunctiva. Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth. It is assumed that the probability of becoming infected is proportionate to the number of bacteria one is exposed to.",
"title": "Transmission"
},
{
"paragraph_id": 12,
"text": "Chlamydiae have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for Chlamydiae since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear within a year, 80% within two years, and 90% within three years.",
"title": "Pathophysiology"
},
{
"paragraph_id": 13,
"text": "The starved chlamydiae enter a persistent growth state wherein they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve.",
"title": "Pathophysiology"
},
{
"paragraph_id": 14,
"text": "There is debate as to whether persistence has relevance. Some believe that persistent chlamydiae are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state.",
"title": "Pathophysiology"
},
{
"paragraph_id": 15,
"text": "The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and the DNA strand displacement amplification (SDA) now are the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of sampling from a cervical swab or by urine specimen. In women seeking an sexually transmitted infection (STI) clinic and a urine test is negative, a subsequent cervical swab has been estimated to be positive in approximately 2% of the time.",
"title": "Diagnosis"
},
{
"paragraph_id": 16,
"text": "At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens.",
"title": "Diagnosis"
},
{
"paragraph_id": 17,
"text": "Because of improved test accuracy, ease of specimen management, convenience in specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving falsely-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. Other methods also exist including: ligase chain reaction (LCR), direct fluorescent antibody resting, enzyme immunoassay, and cell culture.",
"title": "Diagnosis"
},
{
"paragraph_id": 18,
"text": "The swab sample for chlamydial infections does not show difference whether the sample was collected in home or in clinic in term of number patient treated. The implications in cured patient, reinfection, partner management, and safety are unknown.",
"title": "Diagnosis"
},
{
"paragraph_id": 19,
"text": "Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and nonpregnant women because of high false-negative rates.",
"title": "Diagnosis"
},
{
"paragraph_id": 20,
"text": "Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected.",
"title": "Prevention"
},
{
"paragraph_id": 21,
"text": "For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening women with age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women.",
"title": "Prevention"
},
{
"paragraph_id": 22,
"text": "In the United Kingdom the National Health Service (NHS) aims to:",
"title": "Prevention"
},
{
"paragraph_id": 23,
"text": "C. trachomatis infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose) but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin.",
"title": "Treatment"
},
{
"paragraph_id": 24,
"text": "An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner.",
"title": "Treatment"
},
{
"paragraph_id": 25,
"text": "Following treatment people should be tested again after three months to check for reinfection.",
"title": "Treatment"
},
{
"paragraph_id": 26,
"text": "Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths.",
"title": "Epidemiology"
},
{
"paragraph_id": 27,
"text": "In the United States about 1.6 million cases were reported in 2016. The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK.",
"title": "Epidemiology"
},
{
"paragraph_id": 28,
"text": "Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed.",
"title": "Epidemiology"
}
] | Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium Chlamydia trachomatis. Most people who are infected have no symptoms. When symptoms do appear they may occur only several weeks after infection; the incubation period between exposure and being able to infect others is thought to be on the order of two to six weeks. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Chlamydia infections can occur in other areas besides the genitals, including the anus, eyes, throat, and lymph nodes. Repeated chlamydia infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world. Chlamydia can be spread during vaginal, anal, oral, or manual sex and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. Infection by the bacterium Chlamydia trachomatis only occurs in humans. Diagnosis is often by screening which is recommended yearly in sexually active women under the age of twenty-five, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas. Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated, and infected people should be advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment people should be tested again after three months. Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015, about 61 million new cases occurred globally. In the United States about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015 infections resulted in about 200 deaths. The word chlamydia is from the Greek χλαμύδα, meaning 'cloak'. | 2001-11-09T02:21:14Z | 2023-12-06T11:06:38Z | [
"Template:Reflist",
"Template:Cite web",
"Template:Commons category",
"Template:Curlie",
"Template:STD/STI",
"Template:Bacterial cutaneous infections",
"Template:Lang",
"Template:Div col end",
"Template:Authority control",
"Template:Citation needed",
"Template:Div col",
"Template:Legend",
"Template:Citation",
"Template:Short description",
"Template:Other uses",
"Template:TOC limit",
"Template:Cite book",
"Template:Offline",
"Template:Pp",
"Template:Infobox medical condition (new)",
"Template:Cite journal",
"Template:Webarchive",
"Template:Medical resources",
"Template:Main",
"Template:Clear"
] | https://en.wikipedia.org/wiki/Chlamydia |
7,038 | Candidiasis | Candidiasis is a fungal infection due to any type of Candida (a type of yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers along with other symptoms depending on the parts involved.
More than 20 types of Candida may cause infection, with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include breastfeeding, recent antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system.
Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventatively, and concomitantly with medications known to precipitate infections.
Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors.
Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the sex organs (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection.
In immunocompromised individuals, Candida infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting.
Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and in the throat. Irritation may also occur, causing discomfort when swallowing.
Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks.
Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex.
Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin.
Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmissible disease. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases.
Candidiasis can cause a variety of mental symptoms, such as brain fog, memory problems, difficulty concentrating, anxiety, depression, irritability, mood swings, and even psychosis in rare cases. This is because Candida can produce toxins that affect the brain and nervous system, leading to cognitive and emotional problems.
Candida yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. Candida requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic Candida infections.
Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic therapy, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by Candida species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases.
C. albicans was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of Candida cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections.
In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common overall, but a yeast infection on the penis caused by direct contact via sexual intercourse with an infected partner is not uncommon.
Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production.
Vaginal candidiasis can cause congenital candidiasis in newborns.
In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection.
Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection were found to have such an infection, while most had either bacterial vaginosis or a mixed-type infection.
Diagnosis of a yeast infection is confirmed either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the Candida cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many Candida species.
For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms. Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter.
Candidiasis may be divided into these types:
A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics, taken either as pills or as yogurt, help to prevent future infections.
Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis.
Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals.
The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for Candida infections that involve different Candida species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks.
Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers, topical miconazole is the most effective treatment for candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary.
Vaginal yeast infections are typically treated with topical antifungal agents. Penile yeast infections are also treated with antifungal agents, but while an internal treatment may be used (such as a pessary) for vaginal yeast infections, only external treatments – such as a cream – can be recommended for penile treatment. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole are recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. C. albicans can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS, who are often treated with multiple courses of fluconazole for recurrent oral infections.
For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for seven days instead of a shorter duration.
For vaginal yeast infections, many complementary treatments are proposed, however a number have side effects. No benefit from probiotics has been found for active infections.
Treatment typically consists of oral or intravenous antifungal medications. In candidal infections of the blood, intravenous fluconazole or an echinocandin such as caspofungin may be used. Amphotericin B is another option.
In hospitalized patients who develop candidemia, age is an important prognostic factor. Mortality following candidemia is 50% in patients aged ≥75 years and 24% in patients aged <75 years. Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops.
Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease.
It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives.
Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis.
Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States. The incidence of bloodstream candida in intensive care units varies widely between countries.
Descriptions of what sounds like oral thrush go back to the time of Hippocrates circa 460–370 BCE.
The first description of a fungus as the causative agent of an oropharyngeal and oesophageal candidosis was by Bernhard von Langenbeck in 1839.
Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same.
With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin.
The colloquial term "thrush" refers to the resemblance of the white flecks present in some forms of candidiasis (e.g., pseudomembranous candidiasis) with the breast of the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. Candida is also pronounced differently; in American English, the stress is on the "i", whereas in British English the stress is on the first syllable.
The genus Candida and species C. albicans were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include Mycotorula and Torulopsis. The species has also been known in the past as Monilia albicans and Oidium albicans. The current classification is nomen conservandum, which means the name is authorized for use by the International Botanical Congress (IBC).
The genus Candida includes about 150 different species. However, only a few are known to cause human infections. C. albicans is the most significant pathogenic species. Other species pathogenic in humans include C. auris, C. tropicalis, C. parapsilosis, C. dubliniensis, and C. lusitaniae.
The name Candida was proposed by Berkhout. It is from the Latin word toga candida, referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet albicans also comes from Latin, albicare meaning "to whiten". These names refer to the generally white appearance of Candida species when cultured.
A 2005 publication noted that "a large pseudoscientific cult" has developed around the topic of Candida, with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called "Candidiasis hypersensitivity". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis.
High level Candida colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease.
There has been an increase in resistance to antifungals worldwide over the past 30–40 years. | [
{
"paragraph_id": 0,
"text": "Candidiasis is a fungal infection due to any type of Candida (a type of yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white \"cottage cheese-like\" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers along with other symptoms depending on the parts involved.",
"title": ""
},
{
"paragraph_id": 1,
"text": "More than 20 types of Candida may cause infection with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include during breastfeeding, following antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventatively, and concomitantly with medications known to precipitate infections.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the sex organs (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 5,
"text": "In immunocompromised individuals, Candida infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 6,
"text": "Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and in the throat. Irritation may also occur, causing discomfort when swallowing.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 7,
"text": "Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 8,
"text": "Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 9,
"text": "Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 10,
"text": "Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmissible disease. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 11,
"text": "Candidiasis can cause a variety of mental symptoms, such as brain fog, memory problems, difficulty concentrating, anxiety, depression, irritability, mood swings, and even psychosis in rare cases. This is because Candida can produce toxins that affect the brain and nervous system, leading to cognitive and emotional problems.",
"title": "Signs and symptoms"
},
{
"paragraph_id": 12,
"text": "Candida yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body. Candida requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter the bloodstream and cause systemic Candida infections.",
"title": "Causes"
},
{
"paragraph_id": 13,
"text": "Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic therapy, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids with other risk factors such as antibiotics, oral glucocorticoids, not rinsing mouth after use of inhaled corticosteroids or high dose of inhaled corticosteroids put people at even higher risk. Treatment with antibiotics can lead to eliminating the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by Candida species. Diets high in simple carbohydrates have been found to affect rates of oral candidiases.",
"title": "Causes"
},
{
"paragraph_id": 14,
"text": "C. albicans was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of Candida cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections.",
"title": "Causes"
},
{
"paragraph_id": 15,
"text": "In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon.",
"title": "Causes"
},
{
"paragraph_id": 16,
"text": "Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production.",
"title": "Causes"
},
{
"paragraph_id": 17,
"text": "Vaginal candidiasis can cause congenital candidiasis in newborns.",
"title": "Causes"
},
{
"paragraph_id": 18,
"text": "In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. A sample of the infected area may also be taken to determine what organism is causing the infection.",
"title": "Diagnosis"
},
{
"paragraph_id": 19,
"text": "Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection were found to have such an infection, while most had either bacterial vaginosis or a mixed-type infection.",
"title": "Diagnosis"
},
{
"paragraph_id": 20,
"text": "Diagnosis of a yeast infection is confirmed either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the Candida cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many Candida species.",
"title": "Diagnosis"
},
{
"paragraph_id": 21,
"text": "For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms. Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter.",
"title": "Diagnosis"
},
{
"paragraph_id": 22,
"text": "Candidiasis may be divided into these types:",
"title": "Diagnosis"
},
{
"paragraph_id": 23,
"text": "A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics help to prevent future infections. This includes either as pills or as yogurt.",
"title": "Prevention"
},
{
"paragraph_id": 24,
"text": "Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis.",
"title": "Prevention"
},
{
"paragraph_id": 25,
"text": "Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals.",
"title": "Treatment"
},
{
"paragraph_id": 26,
"text": "The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for Candida infections that involve different Candida species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks.",
"title": "Treatment"
},
{
"paragraph_id": 27,
"text": "Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers topical miconazole is the most effective treatment for treating candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary.",
"title": "Treatment"
},
{
"paragraph_id": 28,
"text": "Vaginal yeast infections are typically treated with topical antifungal agents. Penile yeast infections are also treated with antifungal agents, but while an internal treatment may be used (such as a pessary) for vaginal yeast infections, only external treatments – such as a cream – can be recommended for penile treatment. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole is recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. C. albicans can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS who are often treated with multiple courses of fluconazole for recurrent oral infections.",
"title": "Treatment"
},
{
"paragraph_id": 29,
"text": "For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for seven days instead of a shorter duration.",
"title": "Treatment"
},
{
"paragraph_id": 30,
"text": "For vaginal yeast infections, many complementary treatments are proposed, however a number have side effects. No benefit from probiotics has been found for active infections.",
"title": "Treatment"
},
{
"paragraph_id": 31,
"text": "Treatment typically consists of oral or intravenous antifungal medications. In candidal infections of the blood, intravenous fluconazole or an echinocandin such as caspofungin may be used. Amphotericin B is another option.",
"title": "Treatment"
},
{
"paragraph_id": 32,
"text": "In hospitalized patients who develop candidemia, age is an important prognostic factor. Mortality following candidemia is 50% in patients aged ≥75 years and 24% in patients aged <75 years. Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops.",
"title": "Prognosis"
},
{
"paragraph_id": 33,
"text": "Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease.",
"title": "Epidemiology"
},
{
"paragraph_id": 34,
"text": "It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives.",
"title": "Epidemiology"
},
{
"paragraph_id": 35,
"text": "Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis.",
"title": "Epidemiology"
},
{
"paragraph_id": 36,
"text": "Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States. The incidence of bloodstream candida in intensive care units varies widely between countries.",
"title": "Epidemiology"
},
{
"paragraph_id": 37,
"text": "Descriptions of what sounds like oral thrush go back to the time of Hippocrates circa 460–370 BCE.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "The first description of a fungus as the causative agent of an oropharyngeal and oesophageal candidosis was by Bernhard von Langenbeck in 1839.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "The colloquial term \"thrush\" refers to the resemblance of the white flecks present in some forms of candidiasis (e.g., pseudomembranous candidiasis) with the breast of the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. Candida is also pronounced differently; in American English, the stress is on the \"i\", whereas in British English the stress is on the first syllable.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The genus Candida and species C. albicans were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include Mycotorula and Torulopsis. The species has also been known in the past as Monilia albicans and Oidium albicans. The current classification is nomen conservandum, which means the name is authorized for use by the International Botanical Congress (IBC).",
"title": "History"
},
{
"paragraph_id": 43,
"text": "The genus Candida includes about 150 different species. However, only a few are known to cause human infections. C. albicans is the most significant pathogenic species. Other species pathogenic in humans include C. auris, C. tropicalis, C. parapsilosis, C. dubliniensis, and C. lusitaniae.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The name Candida was proposed by Berkhout. It is from the Latin word toga candida, referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet albicans also comes from Latin, albicare meaning \"to whiten\". These names refer to the generally white appearance of Candida species when cultured.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "A 2005 publication noted that \"a large pseudoscientific cult\" has developed around the topic of Candida, with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called \"Candidiasis hypersensitivity\". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis.",
"title": "Alternative medicine"
},
{
"paragraph_id": 46,
"text": "High level Candida colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease.",
"title": "Research"
},
{
"paragraph_id": 47,
"text": "There has been an increase in resistance to antifungals worldwide over the past 30–40 years.",
"title": "Research"
}
] | Candidiasis is a fungal infection due to any type of Candida. When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers along with other symptoms depending on the parts involved. More than 20 types of Candida may cause infection with Candida albicans being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risk factors include during breastfeeding, following antibiotic therapy, and the wearing of dentures. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic therapy. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system. Efforts to prevent infections of the mouth include the use of chlorhexidine mouthwash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment, even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. Oral or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections, including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventatively, and concomitantly with medications known to precipitate infections. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors. | 2001-11-07T15:27:48Z | 2023-12-15T06:03:53Z | [
"Template:Cite journal",
"Template:Webarchive",
"Template:MedlinePlusEncyclopedia",
"Template:Cite news",
"Template:Sisterlinks",
"Template:Short description",
"Template:Redirect",
"Template:Cite book",
"Template:Diseases of the skin and appendages by morphology",
"Template:For",
"Template:Cn",
"Template:Dead link",
"Template:Curlie",
"Template:Medical resources",
"Template:Infobox medical condition (new)",
"Template:Rp",
"Template:Reflist",
"Template:Authority control",
"Template:Main",
"Template:Cite web",
"Template:Mycoses"
] | https://en.wikipedia.org/wiki/Candidiasis |
7,039 | Control theory | Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.
To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that has revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
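As a worked illustration of this idea (the system, symbols and values below are assumed for the example), a mass-spring-damper driven by a force u(t) is described by a second-order differential equation, and taking the Laplace transform with zero initial conditions gives its transfer function:

    m \ddot{y}(t) + b \dot{y}(t) + k y(t) = u(t)
    \quad\Longrightarrow\quad
    G(s) = \frac{Y(s)}{U(s)} = \frac{1}{m s^{2} + b s + k}

In a block diagram the whole plant is then drawn as a single block labelled G(s), with U(s) entering and Y(s) leaving.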
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; it was advanced further still from 1922 onwards by Nicolas Minorsky's development of PID control theory. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.
A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler; the controlled variable should be the building temperature, but it is not, because this is open-loop control of the boiler, which does not give closed-loop control of the temperature.
In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.
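A minimal sketch of this thermostat-style loop in Python follows; the building model, its coefficients and the set point are assumed values chosen only to show the feedback structure, not a realistic thermal model:

    # Closed-loop on/off control of a toy building-temperature model.
    # All numerical values are illustrative assumptions.
    set_point = 20.0          # thermostat setting, deg C (the reference input)
    temperature = 15.0        # measured building temperature (the process output)
    outside = 5.0             # ambient temperature, deg C
    dt = 60.0                 # time step, s

    for _ in range(240):                              # simulate four hours
        boiler_on = temperature < set_point           # feedback: compare PV with SP
        heating = 0.01 if boiler_on else 0.0          # heat input when the boiler is on, deg C/s
        loss = 0.0002 * (temperature - outside)       # heat loss to the outside, deg C/s
        temperature += (heating - loss) * dt          # plant update

    print(round(temperature, 1))   # cycles in a narrow band around the 20 deg C set point

Unlike the timer-only boiler above, the switching decision here depends on the measured temperature, which is what closes the loop.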
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine. Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.
Closed-loop controllers have the following advantages over open-loop controllers:
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
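The sketch below shows a discrete-time PID loop in Python applied to the cruise-control situation described above; the first-order speed model, the drag coefficient and the gains are assumed values for illustration rather than a tuned design:

    # Discrete PID control of a toy vehicle-speed model.
    # Gains and plant constants are illustrative assumptions.
    kp, ki, kd = 0.8, 0.3, 0.05        # proportional, integral and derivative gains
    set_point = 25.0                   # desired speed, m/s
    speed = 20.0                       # current speed (process variable), m/s
    dt = 0.1                           # sample time, s
    integral = 0.0
    prev_error = set_point - speed

    for _ in range(600):                               # 60 s of simulated driving
        error = set_point - speed                      # SP-PV error
        integral += error * dt
        derivative = (error - prev_error) / dt
        throttle = kp * error + ki * integral + kd * derivative   # control action
        prev_error = error
        speed += (throttle - 0.05 * speed) * dt        # toy plant: acceleration minus drag

    print(round(speed, 2))                             # settles close to the 25 m/s set point

In this toy model the integral term is what removes the steady-state error caused by the drag; a purely proportional controller would settle below the set point.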
The field of control theory can be divided into two branches: linear control theory, which applies to systems that obey the superposition principle, and nonlinear control theory, which covers the broader class of systems that do not.
Mathematical techniques for analyzing and designing control systems fall into two different categories: frequency domain methods and time domain (state space) methods.
In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
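As a small sketch of this representation, the matrices below encode the assumed mass-spring-damper from the earlier illustration (the parameter values and the use of scipy are assumptions), and scipy's ss2tf confirms that the state-space model carries the same information as the transfer function:

    import numpy as np
    from scipy import signal

    m, b, k = 1.0, 0.5, 2.0                       # assumed mass, damping, stiffness
    # state vector x = [position, velocity], input u = force, output y = position
    A = np.array([[0.0, 1.0],
                  [-k / m, -b / m]])
    B = np.array([[0.0],
                  [1.0 / m]])
    C = np.array([[1.0, 0.0]])
    D = np.array([[0.0]])

    num, den = signal.ss2tf(A, B, C, D)           # recover G(s) = 1 / (m s^2 + b s + k)
    print(num, den)                               # den is [1, 0.5, 2] for these values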
Control systems can be divided into different categories depending on the number of inputs and outputs.
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include a lead or lag filter, or both. The ultimate goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain margin, phase margin, and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.
The stability of a general dynamical system with no input can be described with Lyapunov stability criteria.
For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside in the open left half of the complex plane for continuous time systems (when the Laplace transform is used), or inside the unit circle for discrete time systems (when the Z-transform is used).
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x axis is the real axis and the discrete Z-transform is in circular coordinates where the ρ axis is the real axis.
When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
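These pole conditions are easy to check numerically; in the Python sketch below the two denominators are assumed examples, with numpy's root finder standing in for an analytical factorization:

    import numpy as np

    # Continuous time: asymptotically stable iff every pole has a negative real part.
    cont_poles = np.roots([1.0, 3.0, 2.0])          # denominator s^2 + 3s + 2 (assumed)
    print(np.all(cont_poles.real < 0))              # True: poles at -1 and -2

    # Discrete time: asymptotically stable iff every pole lies inside the unit circle.
    disc_poles = np.roots([1.0, -0.5])              # denominator z - 0.5 (assumed)
    print(np.all(np.abs(disc_poles) < 1))           # True: pole at 0.5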
If a system in question has an impulse response of x[n] = 0.5^n u[n] (where u[n] is the unit step),
then the Z-transform is given by X(z) = z / (z − 0.5),
which has a pole at z = 0.5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.
However, if the impulse response was x[n] = 1.5^n u[n],
then the Z-transform is X(z) = z / (z − 1.5),
which has a pole at z = 1.5 and is not BIBO stable since the pole has a modulus strictly greater than one.
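The same conclusion can be checked directly from the two impulse responses, since for a discrete LTI system BIBO stability is equivalent to the impulse response being absolutely summable; the truncation length in this sketch is an arbitrary assumption:

    # Partial sums of |x[n]| for the two impulse responses above.
    N = 200                                               # truncation length (assumed)
    stable_sum = sum(0.5 ** n for n in range(N))          # converges towards 2
    unstable_sum = sum(1.5 ** n for n in range(N))        # grows without bound
    print(round(stable_sum, 6), unstable_sum > 1e20)      # prints 2.0 (to 6 places) and True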
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
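As an assumed example of putting one of these tools to work in code (the transfer function and frequency grid below are arbitrary choices), scipy can evaluate Bode magnitude and phase data for a system:

    import numpy as np
    from scipy import signal

    sys = signal.TransferFunction([1.0], [1.0, 0.5, 2.0])   # assumed G(s) = 1/(s^2 + 0.5 s + 2)
    w = np.logspace(-1, 1, 5)                                # a handful of frequencies, rad/s
    w, mag_db, phase_deg = signal.bode(sys, w)
    for wi, m_db, ph in zip(w, mag_db, phase_deg):
        print(f"{wi:6.3f} rad/s  {m_db:7.2f} dB  {ph:7.1f} deg")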
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
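For a linear state-space model these two properties reduce to rank tests on the controllability and observability matrices; the sketch below applies them to the assumed two-state example used earlier (the matrices are assumptions, not a general recipe for every system class):

    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # assumed system matrices
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    n = A.shape[0]

    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])   # [B, AB]
    obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])   # [C; CA]

    print(np.linalg.matrix_rank(ctrb) == n)      # True: every state can be driven by u
    print(np.linalg.matrix_rank(obsv) == n)      # True: every state can be inferred from y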
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have Re[λ] < −λ̄, where λ̄ is a fixed value strictly greater than zero, instead of simply asking that Re[λ] < 0.
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a mass-spring-damper system we know that m ẍ(t) = −K x(t) − B ẋ(t). Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
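A minimal sketch of such off-line identification, under the assumption of a first-order discrete-time model y[k+1] = a·y[k] + b·u[k]; the "measurements" are generated here from known parameter values plus noise, purely so that the least-squares step has data to work on:

    import numpy as np

    rng = np.random.default_rng(0)
    a_true, b_true = 0.9, 0.5                    # "unknown" plant parameters (assumed)
    u = rng.standard_normal(200)                 # recorded input sequence
    y = np.zeros(201)
    for k in range(200):                         # simulated noisy measurements
        y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

    # Fit y[k+1] = a*y[k] + b*u[k] to the data in the least-squares sense.
    phi = np.column_stack([y[:-1], u])           # regressor matrix
    (a_hat, b_hat), *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    print(round(a_hat, 3), round(b_hat, 3))      # estimates close to 0.9 and 0.5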
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
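A sketch of one simple anti-windup arrangement (conditional integration inside a PI controller) is shown below; the gains, the saturation limits and the class name SaturatedPI are assumptions for illustration, and this is only one of several such schemes:

    # PI controller with actuator saturation and a conditional-integration anti-windup.
    class SaturatedPI:
        def __init__(self, kp, ki, u_min, u_max, dt):
            self.kp, self.ki = kp, ki
            self.u_min, self.u_max = u_min, u_max      # actuator limits
            self.dt = dt
            self.integral = 0.0

        def step(self, error):
            u_unsat = self.kp * error + self.ki * self.integral
            u = min(max(u_unsat, self.u_min), self.u_max)   # keep the signal within its limits
            if u == u_unsat:                        # integrate only while the actuator is not
                self.integral += error * self.dt    # saturated, so the integrator cannot wind up
            return u

    controller = SaturatedPI(kp=2.0, ki=1.0, u_min=0.0, u_max=1.0, dt=0.1)
    print(controller.step(5.0))                 # a large error: the output clamps at 1.0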
For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.
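A sketch of this computation for the assumed two-state example, using scipy's place_poles routine; the desired pole locations are arbitrary choices:

    import numpy as np
    from scipy import signal

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])       # assumed open-loop state matrix
    B = np.array([[0.0], [1.0]])                   # assumed input matrix
    desired = [-3.0, -4.0]                         # assumed target closed-loop poles

    K = signal.place_poles(A, B, desired).gain_matrix    # feedback law u = -K x
    print(np.sort(np.linalg.eigvals(A - B @ K).real))    # poles moved to -4 and -3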
Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.
Many active and historical figures made significant contributions to control theory including | [
{
"paragraph_id": 0,
"text": "Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.",
"title": ""
},
{
"paragraph_id": 1,
"text": "To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky. Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors. A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "A notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract \"useful work\" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).",
"title": "Open-loop and closed-loop (feedback) control"
},
{
"paragraph_id": 10,
"text": "In open-loop control, the control action from the controller is independent of the \"process output\" (or \"controlled process variable\"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler, but the controlled variable should be the building temperature, but is not because this is open-loop control of the boiler, which does not give closed-loop control of the temperature.",
"title": "Open-loop and closed-loop (feedback) control"
},
{
"paragraph_id": 11,
"text": "In closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the \"reference input\" or \"set point\". For this reason, closed loop controllers are also called feedback controllers.",
"title": "Open-loop and closed-loop (feedback) control"
},
{
"paragraph_id": 12,
"text": "The definition of a closed loop control system according to the British Standard Institution is \"a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero.\"",
"title": "Open-loop and closed-loop (feedback) control"
},
{
"paragraph_id": 13,
"text": "A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an open-loop controller or non-feedback controller. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is \"fed back\" as input to the process, closing the loop.",
"title": "Classical control theory"
},
{
"paragraph_id": 14,
"text": "In the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle; where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine. Control systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.",
"title": "Classical control theory"
},
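A minimal discrete-time PID loop in the spirit of the cruise-control description above; the toy plant, the gains, and the constant "hill" disturbance are assumptions chosen only for illustration, not a tuned controller.

```python
# Minimal discrete PID controller regulating a toy "vehicle speed" plant.
def pid_step(error, state, kp=0.8, ki=0.3, kd=0.1, dt=0.1):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # PID control law
    return u, (integral, error)

setpoint, speed = 30.0, 20.0        # desired and initial speed (m/s)
state = (0.0, setpoint - speed)     # (integral, previous error)
for _ in range(300):
    error = setpoint - speed
    u, state = pid_step(error, state)
    # Toy plant: engine force u, minus drag, minus a constant "hill" disturbance.
    speed += 0.1 * (u - 0.05 * speed - 0.5)
print(round(speed, 2))              # settles near the 30 m/s set point
```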
{
"paragraph_id": 15,
"text": "Closed-loop controllers have the following advantages over open-loop controllers:",
"title": "Classical control theory"
},
{
"paragraph_id": 16,
"text": "In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.",
"title": "Classical control theory"
},
{
"paragraph_id": 17,
"text": "A common closed-loop controller architecture is the PID controller.",
"title": "Classical control theory"
},
{
"paragraph_id": 18,
"text": "The field of control theory can be divided into two branches:",
"title": "Linear and nonlinear control theory"
},
{
"paragraph_id": 19,
"text": "Mathematical techniques for analyzing and designing control systems fall into two different categories:",
"title": "Analysis techniques - frequency domain and time domain"
},
{
"paragraph_id": 20,
"text": "In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the \"time-domain approach\") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. \"State space\" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.",
"title": "Analysis techniques - frequency domain and time domain"
},
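For reference, the linear time-invariant state-space form the paragraph refers to is conventionally written as

```latex
\dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,\mathbf{u}(t), \qquad
\mathbf{y}(t) = C\,\mathbf{x}(t) + D\,\mathbf{u}(t),
```

where x is the state vector, u the input vector, y the output vector, and A, B, C and D are constant matrices; this is standard notation rather than anything specific to this text.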
{
"paragraph_id": 21,
"text": "Control systems can be divided into different categories depending on the number of inputs and outputs.",
"title": "System interfacing - SISO & MIMO"
},
{
"paragraph_id": 22,
"text": "The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second order and single variable system response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include either or both a Lead or Lag filter. The ultimate end goal is to meet requirements typically provided in the time-domain called the step response, or at times in the frequency domain called the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically Gain and Phase margin and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.",
"title": "System interfacing - SISO & MIMO"
},
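For the second-order approximation mentioned above, the usual time-domain specifications can be estimated from the damping ratio and natural frequency with the standard textbook formulas; the sketch below uses those formulas with purely illustrative numbers.

```python
import math

def second_order_specs(zeta, wn):
    """Approximate step-response specs for an underdamped second-order system."""
    overshoot = 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    settling_time = 4.0 / (zeta * wn)            # common 2%-band approximation
    peak_time = math.pi / (wn * math.sqrt(1.0 - zeta**2))
    return overshoot, settling_time, peak_time

po, ts, tp = second_order_specs(zeta=0.5, wn=2.0)   # example values
print(f"overshoot ~{po:.1f}%  settling ~{ts:.1f} s  peak time ~{tp:.2f} s")
```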
{
"paragraph_id": 23,
"text": "Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs . Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among the people who have shaped modern control theory.",
"title": "System interfacing - SISO & MIMO"
},
{
"paragraph_id": 24,
"text": "The stability of a general dynamical system with no input can be described with Lyapunov stability criteria.",
"title": "Topics in control theory"
},
{
"paragraph_id": 25,
"text": "For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.",
"title": "Topics in control theory"
},
{
"paragraph_id": 26,
"text": "Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside",
"title": "Topics in control theory"
},
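Stated with symbols (a standard criterion, added here only for reference): if the p_i are the poles of the transfer function, then

```latex
\text{continuous time (Laplace domain):}\quad \operatorname{Re}(p_i) < 0 \ \text{ for all } i,
\qquad
\text{discrete time (Z-domain):}\quad |p_i| < 1 \ \text{ for all } i.
```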
{
"paragraph_id": 27,
"text": "The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the x {\\displaystyle x} axis is the real axis and the discrete Z-transform is in circular coordinates where the ρ {\\displaystyle \\rho } axis is the real axis.",
"title": "Topics in control theory"
},
{
"paragraph_id": 28,
"text": "When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and complex component is zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.",
"title": "Topics in control theory"
},
{
"paragraph_id": 29,
"text": "If a system in question has an impulse response of",
"title": "Topics in control theory"
},
{
"paragraph_id": 30,
"text": "then the Z-transform (see this example), is given by",
"title": "Topics in control theory"
},
{
"paragraph_id": 31,
"text": "which has a pole in z = 0.5 {\\displaystyle z=0.5} (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is inside the unit circle.",
"title": "Topics in control theory"
},
{
"paragraph_id": 32,
"text": "However, if the impulse response was",
"title": "Topics in control theory"
},
{
"paragraph_id": 33,
"text": "then the Z-transform is",
"title": "Topics in control theory"
},
{
"paragraph_id": 34,
"text": "which has a pole at z = 1.5 {\\displaystyle z=1.5} and is not BIBO stable since the pole has a modulus strictly greater than one.",
"title": "Topics in control theory"
},
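Expressions consistent with the impulse responses and poles quoted in the two examples above (an assumed reconstruction, not necessarily the original equations) are

```latex
x_1[n] = 0.5^{\,n}\,u[n] \;\Longrightarrow\; X_1(z) = \frac{1}{1 - 0.5\,z^{-1}} = \frac{z}{z - 0.5}, \qquad \text{pole at } z = 0.5,

x_2[n] = 1.5^{\,n}\,u[n] \;\Longrightarrow\; X_2(z) = \frac{1}{1 - 1.5\,z^{-1}} = \frac{z}{z - 1.5}, \qquad \text{pole at } z = 1.5,
```

where u[n] is the unit step; the first pole lies inside the unit circle (BIBO stable), the second outside it (not BIBO stable).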
{
"paragraph_id": 35,
"text": "Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.",
"title": "Topics in control theory"
},
{
"paragraph_id": 36,
"text": "Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.",
"title": "Topics in control theory"
},
{
"paragraph_id": 37,
"text": "Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed stabilizable. Observability instead is related to the possibility of observing, through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.",
"title": "Topics in control theory"
},
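The controllability property described above has a simple numerical test, the Kalman rank condition; the sketch below applies it to an assumed two-state example (a double integrator), which is standard textbook material rather than anything drawn from this text.

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, A^2 B, ...] for an n-state system."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: double integrator (position and velocity driven by a force input).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
ctrb = controllability_matrix(A, B)
print("controllable:", np.linalg.matrix_rank(ctrb) == A.shape[0])   # True: full rank
```

Observability can be checked in the same way with the dual test built from the output matrix and powers of A.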
{
"paragraph_id": 38,
"text": "From a geometrical point of view, looking at the states of each variable of the system to be controlled, every \"bad\" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.",
"title": "Topics in control theory"
},
{
"paragraph_id": 39,
"text": "Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.",
"title": "Topics in control theory"
},
{
"paragraph_id": 40,
"text": "Several different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).",
"title": "Topics in control theory"
},
{
"paragraph_id": 41,
"text": "A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. that the poles have R e [ λ ] < − λ ¯ {\\displaystyle Re[\\lambda ]<-{\\overline {\\lambda }}} , where λ ¯ {\\displaystyle {\\overline {\\lambda }}} is a fixed value strictly greater than zero, instead of simply asking that R e [ λ ] < 0 {\\displaystyle Re[\\lambda ]<0} .",
"title": "Topics in control theory"
},
{
"paragraph_id": 42,
"text": "Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.",
"title": "Topics in control theory"
},
{
"paragraph_id": 43,
"text": "Other \"classical\" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).",
"title": "Topics in control theory"
},
{
"paragraph_id": 44,
"text": "Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).",
"title": "Topics in control theory"
},
{
"paragraph_id": 45,
"text": "A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.",
"title": "Topics in control theory"
},
{
"paragraph_id": 46,
"text": "The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a mass-spring-damper system we know that m x ¨ ( t ) = − K x ( t ) − B x ˙ ( t ) {\\displaystyle m{\\ddot {x}}(t)=-Kx(t)-\\mathrm {B} {\\dot {x}}(t)} . Even assuming that a \"complete\" model is used in designing the controller, all the parameters included in these equations (called \"nominal parameters\") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.",
"title": "Topics in control theory"
},
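As a worked example tying this to the state-space form discussed earlier, the mass-spring-damper equation above can be rewritten with x1 = x and x2 = ẋ (a routine algebraic step, not something stated in the text):

```latex
m\ddot{x}(t) = -Kx(t) - B\dot{x}(t)
\;\Longrightarrow\;
\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 0 & 1 \\ -K/m & -B/m \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
\qquad x_1 = x,\;\; x_2 = \dot{x}.
```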
{
"paragraph_id": 47,
"text": "Some advanced control techniques include an \"on-line\" identification process (see later). The parameters of the model are calculated (\"identified\") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.",
"title": "Topics in control theory"
},
{
"paragraph_id": 48,
"text": "Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain and phase margin and amplitude margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). I.e., if particular robustness qualities are needed, the engineer must shift their attention to a control technique by including these qualities in its properties.",
"title": "Topics in control theory"
},
{
"paragraph_id": 49,
"text": "A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-wind up systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.",
"title": "Topics in control theory"
},
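One simple way to realize the anti-windup idea mentioned above is to clamp the control signal at the actuator limits and stop accumulating the integral term while the output is clamped; the PI sketch below is an illustrative variant (gains, limits and names are assumptions), and back-calculation schemes are a common alternative.

```python
def pi_step_antiwindup(error, integral, kp=0.8, ki=0.3, dt=0.1,
                       u_min=-1.0, u_max=1.0):
    """PI step with output clamping and conditional integration (anti-windup)."""
    u_raw = kp * error + ki * integral
    u = min(max(u_raw, u_min), u_max)     # never exceed the actuator limits
    if u == u_raw:                        # integrate only when not saturated
        integral += error * dt
    return u, integral
```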
{
"paragraph_id": 50,
"text": "For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.",
"title": "System classifications"
},
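As a small, generic illustration of pole placement by state feedback (a textbook double-integrator example, with matrices and pole locations chosen only for this sketch): for this A and B, the feedback u = -Kx gives A - BK = [[0, 1], [-k1, -k2]], whose characteristic polynomial is s^2 + k2 s + k1, so the gains can be read off from the desired polynomial and checked numerically.

```python
import numpy as np

A = np.array([[0.0, 1.0],     # double integrator: x = [position, velocity]
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop poles -2 +/- 1j  =>  s^2 + 4s + 5  =>  K = [5, 4].
K = np.array([[5.0, 4.0]])

closed_loop = A - B @ K
print(np.linalg.eigvals(closed_loop))   # approximately -2+1j and -2-1j
```

For larger systems this is normally done with a library routine (for example scipy.signal.place_poles) or, as the paragraph notes, together with an observer when not all states are measured.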
{
"paragraph_id": 51,
"text": "Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These, e.g., feedback linearization, backstepping, sliding mode control, trajectory linearization control normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.",
"title": "System classifications"
},
{
"paragraph_id": 52,
"text": "When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.",
"title": "System classifications"
},
{
"paragraph_id": 53,
"text": "A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.",
"title": "System classifications"
},
{
"paragraph_id": 54,
"text": "Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.",
"title": "Main control strategies"
},
{
"paragraph_id": 55,
"text": "Many active and historical figures made significant contribution to control theory including",
"title": "People in systems and control"
}
] | Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality. To do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the error signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics. Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.
Although a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs - thus control theory also has applications in life sciences, computer engineering, sociology and operations research. | 2001-11-07T20:08:09Z | 2023-12-30T16:06:52Z | [
"Template:Colbegin",
"Template:Reflist",
"Template:Areas of mathematics",
"Template:Authority control",
"Template:Use mdy dates",
"Template:Main",
"Template:Details",
"Template:Wikibooks",
"Template:Cybernetics",
"Template:Colend",
"Template:Cite book",
"Template:Cite magazine",
"Template:Portal",
"Template:Cite web",
"Template:Commons category",
"Template:Control theory",
"Template:Merge from",
"Template:See also",
"Template:Citation needed",
"Template:Cite journal",
"Template:Systems",
"Template:Short description",
"Template:About",
"Template:Excerpt"
] | https://en.wikipedia.org/wiki/Control_theory |
7,042 | Joint cracking | Joint cracking is the manipulation of joints to produce a sound and related "popping" sensation. It is sometimes performed by physical therapists, chiropractors, osteopaths, and masseurs in Turkish baths pursuing a variety of outcomes.
The cracking of joints, especially knuckles, was long believed to lead to arthritis and other joint problems. However, this is not supported by medical research.
The cracking mechanism and the resulting sound is caused by dissolved gas (nitrogen gas) cavitation bubbles suddenly collapsing inside the joints. This happens when the joint cavity is stretched beyond its normal size. The pressure inside the joint cavity drops and the dissolved gas suddenly comes out of solution and takes gaseous form which makes a distinct popping noise. To be able to crack the same knuckle again requires waiting about 20 minutes before the bubbles dissolve back into the synovial fluid and will be able to form again.
It is possible for voluntary joint cracking by an individual to be considered as part of the obsessive–compulsive disorders spectrum.
For many decades, the physical mechanism that causes the cracking sound as a result of bending, twisting, or compressing joints was uncertain. Suggested causes included:
There were several hypotheses to explain the cracking of joints. Synovial fluid cavitation has some evidence to support it. When a spinal manipulation is performed, the applied force separates the articular surfaces of a fully encapsulated synovial joint, which in turn creates a reduction in pressure within the joint cavity. In this low-pressure environment, some of the gases that are dissolved in the synovial fluid (which are naturally found in all bodily fluids) leave the solution, making a bubble, or cavity (tribonucleation), which rapidly collapses upon itself, resulting in a "clicking" sound. The contents of the resultant gas bubble are thought to be mainly carbon dioxide, oxygen and nitrogen. The effects of this process will remain for a period of time known as the "refractory period", during which the joint cannot be "re-cracked", which lasts about 20 minutes, while the gases are slowly reabsorbed into the synovial fluid. There is some evidence that ligament laxity may be associated with an increased tendency to cavitate.
In 2015, research showed that bubbles remained in the fluid after cracking, suggesting that the cracking sound was produced when the bubble within the joint was formed, not when it collapsed. In 2018, a team in France created a mathematical simulation of what happens in a joint just before it cracks. The team concluded that the sound is caused by bubbles' collapse, and bubbles observed in the fluid are the result of a partial collapse. Due to the theoretical basis and lack of physical experimentation, the scientific community is still not fully convinced of this conclusion.
The snapping of tendons or scar tissue over a prominence (as in snapping hip syndrome) can also generate a loud snapping or popping sound.
The common claim that cracking one's knuckles causes arthritis is not supported by scientific evidence. A study published in 2011 examined the hand radiographs of 215 people (aged 50 to 89). It compared the joints of those who regularly cracked their knuckles to those who did not. The study concluded that knuckle-cracking did not cause hand osteoarthritis, no matter how many years or how often a person cracked their knuckles. This early study has been criticized for not taking into consideration the possibility of confounding factors, such as whether the ability to crack one's knuckles is associated with impaired hand functioning rather than being a cause of it.
The medical doctor Donald Unger cracked the knuckles of his left hand every day for more than sixty years, but he did not crack the knuckles of his right hand. No arthritis or other ailments formed in either hand, and for this, he was awarded 2009's satirical Ig Nobel Prize in Medicine. | [
{
"paragraph_id": 0,
"text": "Joint cracking is the manipulation of joints to produce a sound and related \"popping\" sensation. It is sometimes performed by physical therapists, chiropractors, osteopaths, and masseurs in Turkish baths pursuing a variety of outcomes.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The cracking of joints, especially knuckles, was long believed to lead to arthritis and other joint problems. However, this is not supported by medical research.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The cracking mechanism and the resulting sound is caused by dissolved gas (nitrogen gas) cavitation bubbles suddenly collapsing inside the joints. This happens when the joint cavity is stretched beyond its normal size. The pressure inside the joint cavity drops and the dissolved gas suddenly comes out of solution and takes gaseous form which makes a distinct popping noise. To be able to crack the same knuckle again requires waiting about 20 minutes before the bubbles dissolve back into the synovial fluid and will be able to form again.",
"title": ""
},
{
"paragraph_id": 3,
"text": "It is possible for voluntary joint cracking by an individual to be considered as part of the obsessive–compulsive disorders spectrum.",
"title": ""
},
{
"paragraph_id": 4,
"text": "For many decades, the physical mechanism that causes the cracking sound as a result of bending, twisting, or compressing joints was uncertain. Suggested causes included:",
"title": "Causes"
},
{
"paragraph_id": 5,
"text": "There were several hypotheses to explain the cracking of joints. Synovial fluid cavitation has some evidence to support it. When a spinal manipulation is performed, the applied force separates the articular surfaces of a fully encapsulated synovial joint, which in turn creates a reduction in pressure within the joint cavity. In this low-pressure environment, some of the gases that are dissolved in the synovial fluid (which are naturally found in all bodily fluids) leave the solution, making a bubble, or cavity (tribonucleation), which rapidly collapses upon itself, resulting in a \"clicking\" sound. The contents of the resultant gas bubble are thought to be mainly carbon dioxide, oxygen and nitrogen. The effects of this process will remain for a period of time known as the \"refractory period\", during which the joint cannot be \"re-cracked\", which lasts about 20 minutes, while the gases are slowly reabsorbed into the synovial fluid. There is some evidence that ligament laxity may be associated with an increased tendency to cavitate.",
"title": "Causes"
},
{
"paragraph_id": 6,
"text": "In 2015, research showed that bubbles remained in the fluid after cracking, suggesting that the cracking sound was produced when the bubble within the joint was formed, not when it collapsed. In 2018, a team in France created a mathematical simulation of what happens in a joint just before it cracks. The team concluded that the sound is caused by bubbles' collapse, and bubbles observed in the fluid are the result of a partial collapse. Due to the theoretical basis and lack of physical experimentation, the scientific community is still not fully convinced of this conclusion.",
"title": "Causes"
},
{
"paragraph_id": 7,
"text": "The snapping of tendons or scar tissue over a prominence (as in snapping hip syndrome) can also generate a loud snapping or popping sound.",
"title": "Causes"
},
{
"paragraph_id": 8,
"text": "The common claim that cracking one's knuckles causes arthritis is not supported by scientific evidence. A study published in 2011 examined the hand radiographs of 215 people (aged 50 to 89). It compared the joints of those who regularly cracked their knuckles to those who did not. The study concluded that knuckle-cracking did not cause hand osteoarthritis, no matter how many years or how often a person cracked their knuckles. This early study has been criticized for not taking into consideration the possibility of confounding factors, such as whether the ability to crack one's knuckles is associated with impaired hand functioning rather than being a cause of it.",
"title": "Relation to arthritis"
},
{
"paragraph_id": 9,
"text": "The medical doctor Donald Unger cracked the knuckles of his left hand every day for more than sixty years, but he did not crack the knuckles of his right hand. No arthritis or other ailments formed in either hand, and for this, he was awarded 2009's satirical Ig Nobel Prize in Medicine.",
"title": "Relation to arthritis"
}
] | Joint cracking is the manipulation of joints to produce a sound and related "popping" sensation. It is sometimes performed by physical therapists, chiropractors, osteopaths, and masseurs in Turkish baths pursuing a variety of outcomes. The cracking of joints, especially knuckles, was long believed to lead to arthritis and other joint problems. However, this is not supported by medical research. The cracking mechanism and the resulting sound is caused by dissolved gas cavitation bubbles suddenly collapsing inside the joints. This happens when the joint cavity is stretched beyond its normal size. The pressure inside the joint cavity drops and the dissolved gas suddenly comes out of solution and takes gaseous form which makes a distinct popping noise. To be able to crack the same knuckle again requires waiting about 20 minutes before the bubbles dissolve back into the synovial fluid and will be able to form again. It is possible for voluntary joint cracking by an individual to be considered as part of the obsessive–compulsive disorders spectrum. | 2023-06-25T18:21:42Z | [
"Template:Cite web",
"Template:Cite news",
"Template:Cite journal",
"Template:Cite magazine",
"Template:Short description",
"Template:Multiple image",
"Template:Reflist",
"Template:Isbn",
"Template:Use dmy dates"
] | https://en.wikipedia.org/wiki/Joint_cracking |
|
7,043 | Chemical formula | In chemistry, a chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae.
The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and oxygen atoms).
Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3−CH2−OH or CH3CH2OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents.
Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the figure for butane structural and chemical formulae, at right). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula.
Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge.
A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula of CBn is a variable non-whole number ratio with n ranging from over 4 to more than 6.5.
When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous.
For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds.
Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate [SO4] ion. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3, and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2][ClO4], which illustrates that this compound consists of [ClO2] ions and [ClO4] ions. In such cases, the condensed formula only need be complex enough to show at least one of each ionic species.
Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs.
In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element.
For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms, and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is the actual chemical formula for formaldehyde, but acetic acid has double the number of atoms.
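The reduction from a molecular formula to an empirical formula is just a division of every atom count by their greatest common divisor; a small Python sketch, with formulas written as element-count dictionaries purely for illustration:

```python
from math import gcd
from functools import reduce

def empirical(counts):
    """Divide all atom counts by their greatest common divisor."""
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(empirical({"C": 6, "H": 14}))          # hexane  -> {'C': 3, 'H': 7}
print(empirical({"H": 2, "O": 2}))           # H2O2    -> {'H': 1, 'O': 1}
print(empirical({"C": 6, "H": 12, "O": 6}))  # glucose -> {'C': 1, 'H': 2, 'O': 1}
```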
Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish.
A molecular formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements.
In addition to indicating the number of atoms of each element in a molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure.
The two diagrams show two molecules which are structural isomers of each other, since they both have the same molecular formula C4H10, but they have different structural formulas as shown.
The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule.
A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written: CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.
A triple bond may be expressed with three lines (HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond.
Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3.
In any given chemical compound, the elements always combine in the same proportion with each other. This is the law of constant composition.
The law of constant composition says that, in any particular chemical compound, all samples of that compound will be made up of the same elements in the same proportion or ratio. For example, any water molecule is always made up of two hydrogen atoms and one oxygen atom in a 2:1 ratio. If we look at the relative masses of oxygen and hydrogen in a water molecule, we see that about 89% of the mass of a water molecule is accounted for by oxygen and the remaining 11% is the mass of hydrogen. This mass proportion will be the same for any water molecule.
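The oxygen-to-hydrogen mass split quoted above follows directly from the standard atomic masses (about 1.008 u for hydrogen and 15.999 u for oxygen); a quick check:

```python
H, O = 1.008, 15.999                   # standard atomic masses (u)
water = 2 * H + O                      # H2O
print(round(100 * O / water, 1))       # ~88.8% of the mass is oxygen
print(round(100 * 2 * H / water, 1))   # ~11.2% is hydrogen
```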
The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on the opposite sides from each other (trans or E).
As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems.
For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3, is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3.
For ions, the charge on a particular atom may be denoted with a right-hand superscript. For example, Na+ or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, [SO4]2−. Here + and - are used in place of +1 and -1, respectively.
For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in Hexamminecobalt(III) chloride, [Co(NH3)6]3+Cl−3. Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3.
This is strictly optional; a chemical formula is valid with or without ionization information, and Hexamminecobalt(III) chloride may be written as [Co(NH3)6]3+Cl−3 or [Co(NH3)6]Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together – they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent.
Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is [32PO4]3−. Also a study involving stable isotope ratios might include the molecule 18O16O.
A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen, and 16 8O2 (with the mass number 16 written as well) for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly.
The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20], an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms.
This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.
Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1.
A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n + 1OH (n ≥ 1), giving the homologs methanol, ethanol, propanol for 1 ≤ n ≤ 3.
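Expanding a general formula for successive values of n is purely mechanical; a tiny illustrative sketch for the alcohol series above:

```python
def alcohol_formula(n):
    """Condensed formula of the n-th member of the CnH(2n+1)OH series."""
    return f"C{n}H{2 * n + 1}OH"

for n in range(1, 4):
    print(alcohol_formula(n))   # C1H3OH (methanol), C2H5OH (ethanol), C3H7OH (propanol)
```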
The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically.
By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order.
The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds.
A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br").
The following example formulae are written using the Hill system, and listed in Hill order: | [
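As an illustration of the ordering rules just described (the example formulae below are this sketch's own, written as element-count pairs, not the article's list): carbon first, hydrogen second, then the remaining elements alphabetically, with formulae lacking carbon sorted entirely alphabetically; comparing the resulting element-count sequences lexicographically yields Hill system order.

```python
def hill_sequence(counts):
    """Element/count pairs of a formula arranged according to the Hill system."""
    elements = dict(counts)
    ordered = []
    if "C" in elements:                      # carbon present: C first, then H
        ordered.append(("C", elements.pop("C")))
        if "H" in elements:
            ordered.append(("H", elements.pop("H")))
    ordered += sorted(elements.items())      # remaining (or all) elements alphabetically
    return tuple(ordered)

formulas = [{"C": 2, "H": 6, "O": 1},        # ethanol / dimethyl ether, C2H6O
            {"H": 2, "O": 1},                # water, H2O
            {"C": 1, "H": 4},                # methane, CH4
            {"Br": 1, "H": 1}]               # hydrogen bromide, BrH
for f in sorted(formulas, key=hill_sequence):
    print("".join(e + (str(n) if n > 1 else "") for e, n in hill_sequence(f)))
# prints BrH, CH4, C2H6O, H2O in Hill system order
```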
{
"paragraph_id": 0,
"text": "In chemistry, a chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and oxygen atoms).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a \"semi-structural formula\"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3−CH2−OH or CH3CH2OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the figure for butane structural and chemical formulae, at right). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge.",
"title": ""
},
{
"paragraph_id": 5,
"text": "A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula of CBn is a variable non-whole number ratio with n ranging from over 4 to more than 6.5.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous.",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds.",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate [SO4] ion. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3, and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2][ClO4], which illustrates that this compound consists of [ClO2] ions and [ClO4] ions. In such cases, the condensed formula only need be complex enough to show at least one of each ionic species.",
"title": "Overview"
},
{
"paragraph_id": 9,
"text": "Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs.",
"title": "Overview"
},
{
"paragraph_id": 10,
"text": "",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element.",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms, and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is the actual chemical formula for formaldehyde, but acetic acid has double the number of atoms.",
"title": "Types"
},
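The reduction from a molecular formula to an empirical formula described above amounts to dividing every atom count by the greatest common divisor of all the counts. A minimal Python sketch of that arithmetic follows; the helper name `empirical` is illustrative, not from any particular library:

```python
# Minimal sketch (illustrative, not from any library): reduce a molecular
# formula's atom counts to an empirical formula by dividing every count by
# the greatest common divisor of all the counts.
from functools import reduce
from math import gcd

def empirical(counts: dict[str, int]) -> dict[str, int]:
    """Return the smallest whole-number ratio of the given atom counts."""
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(empirical({"C": 6, "H": 14}))          # hexane C6H14 -> {'C': 3, 'H': 7}
print(empirical({"H": 2, "O": 2}))           # hydrogen peroxide H2O2 -> {'H': 1, 'O': 1}
print(empirical({"C": 6, "H": 12, "O": 6}))  # glucose C6H12O6 -> {'C': 1, 'H': 2, 'O': 1}
```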
{
"paragraph_id": 13,
"text": "Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish.",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "A molecular formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "In addition to indicating the number of atoms of each elementa molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "The two diagrams show two molecules which are structural isomers of each other, since they both have the same molecular formula C4H10, but they have different structural formulas as shown.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written: CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "A triple bond may be expressed with three lines (HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3.",
"title": "Types"
},
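To illustrate how a condensed formula with parenthesized groups such as (CH3)3CH expands to the same atom counts as CH3CH2CH2CH3, here is a small sketch; the `atom_counts` helper and its regular expression are assumptions for illustration, handling only formulas built from element symbols, digits, and round brackets:

```python
import re
from collections import Counter

# Illustrative sketch: expand a condensed formula written with element
# symbols, digits and round brackets into per-element atom counts.
TOKEN = re.compile(r"([A-Z][a-z]?)(\d*)|(\()|(\))(\d*)")

def atom_counts(formula: str) -> Counter:
    stack = [Counter()]
    for element, count, open_p, close_p, group_count in TOKEN.findall(formula):
        if element:
            stack[-1][element] += int(count or 1)
        elif open_p:
            stack.append(Counter())        # start a new group, e.g. "(CH3"
        else:
            group = stack.pop()            # close the group and apply its
            mult = int(group_count or 1)   # multiplier, e.g. ")3"
            for el, n in group.items():
                stack[-1][el] += n * mult
    return stack[0]

# Isobutane and n-butane are isomers: both come out as C4H10.
print(dict(atom_counts("(CH3)3CH")))      # {'C': 4, 'H': 10}
print(dict(atom_counts("CH3CH2CH2CH3")))  # {'C': 4, 'H': 10}
```

The identical counts confirm that the condensed notation distinguishes connectivity (branched versus straight chain) rather than composition.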
{
"paragraph_id": 21,
"text": "In any given chemical compound, the elements always combine in the same proportion with each other. This is the law of constant composition.",
"title": "Law of composition"
},
{
"paragraph_id": 22,
"text": "The law of constant composition says that, in any particular chemical compound, all samples of that compound will be made up of the same elements in the same proportion or ratio. For example, any water molecule is always made up of two hydrogen atoms and one oxygen atom in a 2:1 ratio. If we look at the relative masses of oxygen and hydrogen in a water molecule, we see that 94% of the mass of a water molecule is accounted for by oxygen and the remaining 6% is the mass of hydrogen. This mass proportion will be the same for any water molecule.",
"title": "Law of composition"
},
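As a worked check of the mass proportions quoted above, the fractions follow directly from the standard atomic masses; approximate values are assumed here:

```python
# Worked check of the mass proportions of water (H2O), assuming approximate
# standard atomic masses of 1.008 for hydrogen and 15.999 for oxygen.
ATOMIC_MASS = {"H": 1.008, "O": 15.999}
water = {"H": 2, "O": 1}

total = sum(ATOMIC_MASS[el] * n for el, n in water.items())   # about 18.015
for el, n in water.items():
    print(el, f"{ATOMIC_MASS[el] * n / total:.1%}")           # H 11.2%, O 88.8%
```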
{
"paragraph_id": 23,
"text": "The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on the opposite sides from each other (trans or E).",
"title": "Law of composition"
},
{
"paragraph_id": 24,
"text": "As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems.",
"title": "Law of composition"
},
{
"paragraph_id": 25,
"text": "For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3, is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3.",
"title": "Law of composition"
},
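For the repeating-unit notation above, the atom totals of CH3(CH2)nCH3 follow by simple counting: two terminal CH3 groups plus n CH2 units. A short sketch, with a hypothetical helper name used purely for illustration:

```python
# Illustrative sketch: atom totals for the hydrocarbon CH3(CH2)nCH3 as a
# function of the number n of repeating CH2 units.
def alkane_counts(n: int) -> dict[str, int]:
    carbons = 2 + n           # two terminal CH3 carbons plus n CH2 carbons
    hydrogens = 6 + 2 * n     # three H on each CH3, two H on each CH2
    return {"C": carbons, "H": hydrogens}

print(alkane_counts(50))      # {'C': 52, 'H': 106}, i.e. CH3(CH2)50CH3 is C52H106
```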
{
"paragraph_id": 26,
"text": "For ions, the charge on a particular atom may be denoted with a right-hand superscript. For example, Na, or Cu. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O, or sulfate, SO2−4. Here + and - are used in place of +1 and -1, respectively.",
"title": "Law of composition"
},
{
"paragraph_id": 27,
"text": "For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12], which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in Hexamminecobalt(III) chloride, [Co(NH3)6]Cl−3. Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3.",
"title": "Law of composition"
},
{
"paragraph_id": 28,
"text": "This is strictly optional; a chemical formula is valid with or without ionization information, and Hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl−3 or [Co(NH3)6]Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together – they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent.",
"title": "Law of composition"
},
{
"paragraph_id": 29,
"text": "Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is [PO4]. Also a study involving stable isotope ratios might include the molecule OO.",
"title": "Isotopes"
},
{
"paragraph_id": 30,
"text": "A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen, and 8O2 for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly.",
"title": "Isotopes"
},
{
"paragraph_id": 31,
"text": "The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20], an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms.",
"title": "Trapped atoms"
},
{
"paragraph_id": 32,
"text": "This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.",
"title": "Trapped atoms"
},
{
"paragraph_id": 33,
"text": "Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1.",
"title": "Non-stoichiometric chemical formulae"
},
{
"paragraph_id": 34,
"text": "A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n + 1OH (n ≥ 1), giving the homologs methanol, ethanol, propanol for 1 ≤ n ≤ 3.",
"title": "General forms for organic compounds"
},
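The general formula CnH2n+1OH can be turned into the individual members of the homologous series by substituting values of n; a small sketch for illustration:

```python
# Illustrative sketch: members of the homologous series CnH(2n+1)OH.
def alcohol_formula(n: int) -> str:
    carbon = "C" if n == 1 else f"C{n}"   # write "C" rather than "C1"
    return f"{carbon}H{2 * n + 1}OH"

for n in range(1, 4):
    print(alcohol_formula(n))   # CH3OH (methanol), C2H5OH (ethanol), C3H7OH (propanol)
```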
{
"paragraph_id": 35,
"text": "The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically.",
"title": "Hill system"
},
{
"paragraph_id": 36,
"text": "By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order.",
"title": "Hill system"
},
{
"paragraph_id": 37,
"text": "The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds.",
"title": "Hill system"
},
{
"paragraph_id": 38,
"text": "A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so \"B\" comes before \"Be\", which comes before \"Br\").",
"title": "Hill system"
},
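A sketch of the Hill-system rules described above, assuming per-element atom counts are already known (the helper names are illustrative): carbon is written first, hydrogen second, and the remaining elements follow in alphabetical order of their symbols; with no carbon present, every element, including hydrogen, is sorted alphabetically. Plain string comparison of the symbols also reproduces the single-letter-before-two-letter behaviour noted above ("B" < "Be" < "Br").

```python
# Illustrative sketch of Hill-system ordering from per-element atom counts:
# carbon first, then hydrogen, then the remaining elements alphabetically;
# with no carbon present, everything (including hydrogen) is alphabetical.
def hill_sequence(counts: dict[str, int]) -> list[tuple[str, int]]:
    rest = dict(counts)
    ordered = []
    if "C" in rest:
        ordered.append(("C", rest.pop("C")))
        if "H" in rest:
            ordered.append(("H", rest.pop("H")))
    ordered.extend(sorted(rest.items()))   # 'B' < 'Be' < 'Br' under plain string order
    return ordered

def hill_formula(counts: dict[str, int]) -> str:
    return "".join(el + (str(n) if n > 1 else "") for el, n in hill_sequence(counts))

print(hill_formula({"H": 6, "C": 2, "O": 1}))  # C2H6O (ethanol)
print(hill_formula({"H": 2, "S": 1, "O": 4}))  # H2O4S (sulfuric acid, no carbon)
```

Collating a list of formulae into Hill system order then amounts to sorting them by these (element, count) sequences, with earlier elements and numbers treated as more significant than later ones.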
{
"paragraph_id": 39,
"text": "The following example formulae are written using the Hill system, and listed in Hill order:",
"title": "Hill system"
}
] | In chemistry, a chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae. The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O, while its molecular formula is C6H12O6. Sometimes a chemical formula is complicated by being written as a condensed formula, which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3−CH2−OH or CH3CH2OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents. Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds. For reasons of structural complexity, a single condensed chemical formula may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula, but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula. Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge. | 2001-11-08T14:47:11Z | 2023-11-29T12:35:19Z | [
"Template:Elucidate",
"Template:Portal",
"Template:Wikidata property",
"Template:Cite book",
"Template:Short description",
"Template:Citation needed",
"Template:ComplexNuclide",
"Template:Reflist",
"Template:Cite journal",
"Template:Commons category-inline",
"Template:Pp-vandalism",
"Template:Chem2",
"Template:Anchor",
"Template:Image frame",
"Template:Snd",
"Template:Notelist",
"Template:Molecular visualization",
"Template:Molecules detected in outer space",
"Template:Infobox",
"Template:Authority control",
"Template:Cite web",
"Template:Main"
] | https://en.wikipedia.org/wiki/Chemical_formula |
7,044 | Beetle | Beetles are insects that form the order Coleoptera (/koʊliːˈɒptərə/), in the superorder Holometabola. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops.
Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are holometabolans, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage.
Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pests include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, and the mountain pine beetle. Most beetles, however, do not cause economic damage and many, such as the lady beetles and dung beetles are beneficial by helping to control insect pests.
The name of the taxonomic order, Coleoptera, comes from the Greek koleopteros (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from koleos, sheath, and pteron, wing. The English name beetle comes from the Old English word bitela, little biter, related to bītan (to bite), leading to Middle English betylle. Another Old English name for beetle is ċeafor, chafer, used in names such as cockchafer, from the Proto-Germanic *kebrô ("beetle"; compare German Käfer, Dutch kever, Afrikaans kewer).
Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all insect species so far described, and about 25% of all animal species. A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a "surprisingly narrow range" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3), and extrapolations based on body size by year of description (1.7 to 2.1 million).
This immense diversity led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Christian God from the works of His Creation, "An inordinate fondness for beetles".
However, the ranking of beetles as most diverse has been challenged. Multiple studies posit that Diptera (flies) and/or Hymenoptera (sawflies, wasps, ants and bees) may have more species.
Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots - even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae.
The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, Goliathus goliatus, which can attain a mass of at least 115 g (4.1 oz) and a length of 11.5 cm (4.5 in). Adult male goliath beetles are the heaviest beetles in their adult stage, weighing 70–100 g (2.5–3.5 oz) and measuring up to 11 cm (4.3 in). Adult elephant beetles, Megasoma elephas and Megasoma actaeon, often reach 50 g (1.8 oz) and 10 cm (3.9 in).
The longest beetle is the Hercules beetle Dynastes hercules, with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle and the smallest free-living insect (as of 2015), is the featherwing beetle Scydosella musawasensis which may measure as little as 325 μm in length.
The oldest known beetle is Coleopsis, from the earliest Permian (Asselian) of Germany, around 295 million years ago. Early beetles from the Permian, which are collectively grouped into the "Protocoleoptera" are thought to have been xylophagous (wood eating) and wood boring. Fossils from this time have been found in Siberia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America made in the Wellington Formation of Oklahoma were published in 2005 and 2008. The earliest members of modern beetle lineages appeared during the Late Permian. In the Permian–Triassic extinction event at the end of the Permian, most "protocoleopteran" lineages became extinct. Beetle diversity did not recover to pre-extinction levels until the Middle Triassic.
During the Jurassic (210 to 145 mya), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase of the number of beetle families during the Cretaceous does not correlate with the increase of the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship.
There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin.
The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species where both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. A 2020 review of the palaeoecological interpretations of fossil beetles from Cretaceous ambers has suggested that saproxylicity was the most common feeding strategy, with fungivorous species in particular appearing to dominate.
Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as overlying Santana formation; the latter was near the equator at that time. In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia.
Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species. It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes.
The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders. Adephaga contains about 10 families of largely predatory beetles, and includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs). Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius. The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible.
The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian.
Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya).
Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually located recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other holometabolan groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures.
The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few Longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (Dermestidae), some rove beetles (Omaliinae), and the Derodontidae.
Beetle antennae are primarily organs of sensory perception and can detect motion, odor and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defense. In the cerambycid Onychocerus albitarsis, the antennae have venom-injecting structures used in defense, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather-like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arise between the eyes and the mandibles, and in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts; the first part is called the scape and the second part is the pedicel. The other segments are jointly called the flagellum.
Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species.
The thorax is segmented into the two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulate from the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen.
The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. In aquatic beetles, including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap.
The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hindwings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft wing beetles include the net-winged beetle Calopteron discrepans, which has brittle wings that rupture easily in order to release chemicals for defense.
Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold (jugum) of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight.
The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for breathing and respiration, called a spiracle, composing three different segmented sclerites: the tergum, pleura, and the sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes.
The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, that varies in dimensions between species, with a large amount of cecum, and the hindgut, with varying lengths. There are typically four to six Malpighian tubules.
The nervous system in beetles contains all the types found in insects, varying between different species, from three thoracic and seven or eight abdominal ganglia which can be distinguished to that in which all the thoracic and abdominal ganglia are fused to form a composite structure.
Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse.
Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out faster than in. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble.
Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the hemocoel. It has paired inlets or ostia at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel and out through the anterior cavity in the head.
Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light.
Tympanal organs or hearing organs consist of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus Cicindela (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation.
Beetles are members of the superorder Holometabola, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa sometimes is called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin.
Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae) which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialog before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced.
Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In Eupompha, the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behavior may be different amongst dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle (Chysolina graminis) is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom.
Competition can play a part in the mating rituals of species such as burying beetles (Nicrophorus), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg.
Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs. A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection.
The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads and distinguishable thoracic and abdominal segments (the abdomen usually with ten segments, though sometimes only eight or nine).
Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs.
All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle Epicauta vittata (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva—the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring.
The larval period can vary widely. The fungus-feeding staphylinid Phanerota fasciata undergoes three moults in 3.2 days at room temperature, while Anisotoma sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days, possibly making it the fastest-growing beetle. Dermestid beetles such as Trogoderma inclusum can remain in an extended larval state under unfavourable conditions, even reducing their size between moults. A larva is reported to have survived for 3.5 years in an enclosed container.
As with all holometabolans, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed as obtect pupae).
Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult Eburia quadrigeminata (Cerambycidae), while Buprestis aurulenta and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items.
The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. Some beetle species (many Cetoniinae; some Scarabaeinae, Curculionidae and Buprestidae) fly with the elytra closed, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species Coccinella septempunctata and Harmonia axyridis using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m.
Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks.
Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive.
Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles are able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked.
Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle Bledius spectabilis lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle Dicheirotrichus gustavi and from the parasitoidal wasp Barycnemis blediator, which kills some 15% of the larvae.
Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury small animal carcass to serve as a food resource for their young and build a brood chamber around it. The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them.
Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring.
Most species of beetles do not display parental care behaviors after the eggs have been laid.
Subsociality, where females guard their offspring, is well-documented in two families of Chrysomelidae, Cassidinae and Chrysomelinae.
Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil Austroplatypus incompertus. This Australian species lives in horizontal networks of tunnels, in the heartwood of Eucalyptus trees. It is one of more than 300 species of wood-boring Ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults.
Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi: some beetles have yeasts living in their guts, including yeasts previously found nowhere else.
Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark, some feed on wood while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores.
Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour.
Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or variously colored scales or hairs cause beetles such as the avocado weevil Heilipus apiatus to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate.
Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles, secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, their bright or contrasting coloration warning off predators; many beetles and other insects mimic these chemically protected species.
Chemical defense is important in some species, usually being advertised by bright aposematic colors. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defenses may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses.
Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia) employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland consists of two chambers: a reservoir holding hydroquinones and hydrogen peroxide, and a reaction chamber holding catalase and peroxidase enzymes. When the contents mix, the hydrogen peroxide is broken down into water and oxygen and the hydroquinones are oxidized to quinones; the strongly exothermic reactions produce an explosive ejection that reaches a temperature of around 100 °C (212 °F). The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators.
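A minimal sketch of the underlying chemistry, assuming the standard bombardier-beetle reactions (the stoichiometry shown here is a textbook simplification, not taken from this article):

\[ \mathrm{C_6H_4(OH)_2 + H_2O_2 \;\longrightarrow\; C_6H_4O_2 + 2\,H_2O} \qquad \text{(hydroquinone oxidized to 1,4-benzoquinone)} \]
\[ \mathrm{2\,H_2O_2 \;\longrightarrow\; 2\,H_2O + O_2} \qquad \text{(hydrogen peroxide decomposed by catalase)} \]

Both reactions release heat, which raises the mixture to roughly 100 °C, while the liberated oxygen gas pressurizes the chamber and expels the quinone-laden spray.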
Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation.
A few species of beetles are ectoparasitic on mammals. One such species, Platypsyllus castoris, parasitises beavers (Castor spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. Others are kleptoparasites of other invertebrates, such as the small hive beetle (Aethina tumida) that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding off of, and eventually killing their hosts.
Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important in some parts of the world such as semiarid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa.
Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the beetles and the fungus both benefit. The beetles cannot eat the wood due to toxins, and use their relationship with fungi to help overcome the defenses of the host tree and to provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved.
About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism that tides them over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months), adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle Pterostichus brevicornis showed that the body fat levels of adults were highest in autumn, with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration.
All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle Pityogenes chalcographus can survive −39 °C whilst overwintering beneath tree bark; the Alaskan beetle Cucujus clavipes puniceus is able to withstand −58 °C, and its larvae may survive −100 °C. At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to the survival of beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by Cucujus clavipes can be survived through deliberate dehydration in conjunction with the antifreeze proteins, which concentrates the antifreezes several fold. The hemolymph of the mealworm beetle Tenebrio molitor contains several antifreeze proteins. The Alaskan beetle Upis ceramboides can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar alcohol threitol.
Conversely, desert-dwelling beetles are adapted to tolerate high temperatures. For example, the tenebrionid beetle Onymacris rugatipennis can withstand 50 °C. Tiger beetles in hot, sandy areas are often whitish (for example, Habroscelimorpha dorsalis), to reflect more heat than a darker color would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed.
The fogstand beetle of the Namib Desert, Stenocara gracilipes, is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards their mouthparts. Similar adaptations are found in several other Namib desert beetles such as Onymacris unguicularis.
Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from the water, but larvae and pupae often cannot. Adults of Cicindela togata are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may be sustained by switching to anaerobic metabolic pathways or by reducing the metabolic rate. Anoxia tolerance in the adult carabid beetle Pelophila borealis was tested in laboratory conditions, and it was found that the beetles could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C.
Many beetle species undertake annual mass movements termed migrations. These include the pollen beetle Meligethes aeneus and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of the mountain pine beetle (Dendroctonus ponderosae) in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare.
Several species of dung beetle, especially the sacred scarab, Scarabaeus sacer, were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century Moralia. The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell.
Pliny the Elder discusses beetles in his Natural History, describing the stag beetle: "Some insects, for the preservation of their wings, are covered with a crust (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant.
About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US, traveling 40 to 160 miles (60–260 km) per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated that, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year.
The bark beetle, elm leaf beetle and the Asian longhorned beetle (Anoplophora glabripennis) are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America.
Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, Leptinotarsa decemlineata, is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles.
The death watch beetle, Xestobium rufovillosum (Ptinidae), is a serious pest of older wooden buildings in Europe. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction.
Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada.
Beetles can be beneficial to human economics by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus Zygogramma is native to North America but has been used to control Parthenium hysterophorus in India and Ambrosia artemisiifolia in Russia.
Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as Musca vetustissima and Haematobia exigua, which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of Musca vetustissima, following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually by burying above-ground livestock feces.
The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue.
Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments.
Due to their habitat specificity, many species of beetles have been suggested as suitable as indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. According to the habitats, many other groups such as the rove beetles in human-modified habitats, dung beetles in savannas and saproxylic beetles in forests have been suggested as potential indicator species.
Many beetles have durable elytra that have been used as a material in art, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, a practice made possible by the exceptionally hard elytra and sedentary habits of the genus.
Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male Xylotrupes rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve gambling both money and property. In South Korea the Dytiscidae species Cybister tripunctatus is used in a roulette-like game.
Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the "hugu" weevil Rhynchophorus ferrugineus as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle.
Some species of beetle are kept as pets, for example diving beetles (Dytiscidae) may be kept in a domestic fresh water tank.
In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles.
Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book The Malay Archipelago, including 2,000 species new to science.
Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle (Stenocara gracilipes) has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall.
Living beetles have been used as cyborgs. A Defense Advanced Research Projects Agency-funded project implanted electrodes into Mecynorhina torquata beetles, allowing them to be remotely controlled via a radio receiver carried on the back, as a proof of concept for surveillance work. Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of Mecynorhina torquata as well as graded turning and backward walking of Zophobas morio.
Research published in 2020 sought to create a robotic camera backpack for beetles. Miniature cameras weighing 248 mg were attached to live beetles of the Tenebrionid genera Asbolus and Eleodes. The cameras filmed over a 60° range for up to 6 hours.
Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to impact on beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened while others are already feared extinct. Island species tend to be more susceptible as in the case of Helictopleurus undatus of Madagascar which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, Lucanus cervus, and tiger beetles (Cicindelidae). In Japan the Genji firefly, Luciola cruciata, is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, Cicinis bruchi. | [
{
"paragraph_id": 0,
"text": "Beetles are insects that form the order Coleoptera (/koʊliːˈɒptərə/), in the superorder Holometabola. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are holometabolans, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pests include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, and the mountain pine beetle. Most beetles, however, do not cause economic damage and many, such as the lady beetles and dung beetles are beneficial by helping to control insect pests.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The name of the taxonomic order, Coleoptera, comes from the Greek koleopteros (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from koleos, sheath, and pteron, wing. The English name beetle comes from the Old English word bitela, little biter, related to bītan (to bite), leading to Middle English betylle. Another Old English name for beetle is ċeafor, chafer, used in names such as cockchafer, from the Proto-Germanic *kebrô (\"beetle\"; compare German Käfer, Dutch kever, Afrikaans kewer).",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all insect species so far described, and about 25% of all animal species. A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a \"surprisingly narrow range\" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3), and extrapolations based on body size by year of description (1.7 to 2.1 million).",
"title": "Distribution and diversity"
},
{
"paragraph_id": 5,
"text": "This immense diversity led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Christian God from the works of His Creation, \"An inordinate fondness for beetles\".",
"title": "Distribution and diversity"
},
{
"paragraph_id": 6,
"text": "However, the ranking of beetles as most diverse has been challenged. Multiple studies posit that Diptera (flies) and/or Hymenoptera (sawflies, wasps, ants and bees) may have more species.",
"title": "Distribution and diversity"
},
{
"paragraph_id": 7,
"text": "Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots - even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae.",
"title": "Distribution and diversity"
},
{
"paragraph_id": 8,
"text": "The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, Goliathus goliatus, which can attain a mass of at least 115 g (4.1 oz) and a length of 11.5 cm (4.5 in). Adult male goliath beetles are the heaviest beetle in its adult stage, weighing 70–100 g (2.5–3.5 oz) and measuring up to 11 cm (4.3 in). Adult elephant beetles, Megasoma elephas and Megasoma actaeon often reach 50 g (1.8 oz) and 10 cm (3.9 in).",
"title": "Distribution and diversity"
},
{
"paragraph_id": 9,
"text": "The longest beetle is the Hercules beetle Dynastes hercules, with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle and the smallest free-living insect (as of 2015), is the featherwing beetle Scydosella musawasensis which may measure as little as 325 μm in length.",
"title": "Distribution and diversity"
},
{
"paragraph_id": 10,
"text": "The oldest known beetle is Coleopsis, from the earliest Permian (Asselian) of Germany, around 295 million years ago. Early beetles from the Permian, which are collectively grouped into the \"Protocoleoptera\" are thought to have been xylophagous (wood eating) and wood boring. Fossils from this time have been found in Siberia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America made in the Wellington Formation of Oklahoma were published in 2005 and 2008. The earliest members of modern beetle lineages appeared during the Late Permian. In the Permian–Triassic extinction event at the end of the Permian, most \"protocoleopteran\" lineages became extinct. Beetle diversity did not recover to pre-extinction levels until the Middle Triassic.",
"title": "Evolution"
},
{
"paragraph_id": 11,
"text": "During the Jurassic (210 to 145 mya), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase of the number of beetle families during the Cretaceous does not correlate with the increase of the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship.",
"title": "Evolution"
},
{
"paragraph_id": 12,
"text": "There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin.",
"title": "Evolution"
},
{
"paragraph_id": 13,
"text": "The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species where both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. A 2020 review of the palaeoecological interpretations of fossil beetles from Cretaceous ambers has suggested that saproxylicity was the most common feeding strategy, with fungivorous species in particular appearing to dominate.",
"title": "Evolution"
},
{
"paragraph_id": 14,
"text": "Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as overlying Santana formation; the latter was near the equator at that time. In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia.",
"title": "Evolution"
},
{
"paragraph_id": 15,
"text": "Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species. It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes.",
"title": "Evolution"
},
{
"paragraph_id": 16,
"text": "The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders. Adephaga contains about 10 families of largely predatory beetles, includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs). Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius. The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible.",
"title": "Phylogeny"
},
{
"paragraph_id": 17,
"text": "The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian.",
"title": "Phylogeny"
},
{
"paragraph_id": 18,
"text": "Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya).",
"title": "Phylogeny"
},
{
"paragraph_id": 19,
"text": "Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually located recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other holometabolan groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures.",
"title": "External morphology"
},
{
"paragraph_id": 20,
"text": "The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few Longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (Dermestidae), some rove beetles (Omaliinae), and the Derodontidae.",
"title": "External morphology"
},
{
"paragraph_id": 21,
"text": "Beetle antennae are primarily organs of sensory perception and can detect motion, odor and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defense. In the cerambycid Onychocerus albitarsis, the antennae have venom injecting structures used in defense, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have a capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arises between the eye and the mandibles and in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts, the first part is called the scape and the second part is the pedicel. The other segments are jointly called the flagellum.",
"title": "External morphology"
},
{
"paragraph_id": 22,
"text": "Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species.",
"title": "External morphology"
},
{
"paragraph_id": 23,
"text": "The thorax is segmented into the two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulate from the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen.",
"title": "External morphology"
},
{
"paragraph_id": 24,
"text": "The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. Aquatic beetles including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap.",
"title": "External morphology"
},
{
"paragraph_id": 25,
"text": "The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hindwings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft wing beetles include the net-winged beetle Calopteron discrepans, which has brittle wings that rupture easily in order to release chemicals for defense.",
"title": "External morphology"
},
{
"paragraph_id": 26,
"text": "Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold (jugum) of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight.",
"title": "External morphology"
},
{
"paragraph_id": 27,
"text": "The abdomen is the section behind the metathorax, made up of a series of rings, each with a hole for breathing and respiration, called a spiracle, composing three different segmented sclerites: the tergum, pleura, and the sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes.",
"title": "External morphology"
},
{
"paragraph_id": 28,
"text": "The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, that varies in dimensions between species, with a large amount of cecum, and the hindgut, with varying lengths. There are typically four to six Malpighian tubules.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 29,
"text": "The nervous system in beetles contains all the types found in insects, varying between different species, from three thoracic and seven or eight abdominal ganglia which can be distinguished to that in which all the thoracic and abdominal ganglia are fused to form a composite structure.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 30,
"text": "Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 31,
"text": "Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out faster than in. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 32,
"text": "Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the hemocoel. It has paired inlets or ostia at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel and out through the anterior cavity in the head.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 33,
"text": "Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 34,
"text": "Tympanal organs or hearing organs consist of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus Cicindela (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation.",
"title": "Anatomy and physiology"
},
{
"paragraph_id": 35,
"text": "Beetles are members of the superorder Holometabola, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa sometimes is called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin.",
"title": "Reproduction and development"
},
{
"paragraph_id": 36,
"text": "Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae) which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialog before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced.",
"title": "Reproduction and development"
},
{
"paragraph_id": 37,
"text": "Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In Eupompha, the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behavior may be different amongst dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle (Chysolina graminis) is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom.",
"title": "Reproduction and development"
},
{
"paragraph_id": 38,
"text": "Competition can play a part in the mating rituals of species such as burying beetles (Nicrophorus), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg.",
"title": "Reproduction and development"
},
{
"paragraph_id": 39,
"text": "Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs. A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection.",
"title": "Reproduction and development"
},
{
"paragraph_id": 40,
"text": "The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads, distinguishable thoracic and abdominal segments (usually the tenth, though sometimes the eighth or ninth).",
"title": "Reproduction and development"
},
{
"paragraph_id": 41,
"text": "Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs.",
"title": "Reproduction and development"
},
{
"paragraph_id": 42,
"text": "All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle Epicauta vittata (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva—the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring.",
"title": "Reproduction and development"
},
{
"paragraph_id": 43,
"text": "The larval period can vary widely. A fungus feeding staphylinid Phanerota fasciata undergoes three moults in 3.2 days at room temperature while Anisotoma sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days and possibly represents the fastest growing beetles. Dermestid beetles, Trogoderma inclusum can remain in an extended larval state under unfavourable conditions, even reducing their size between moults. A larva is reported to have survived for 3.5 years in an enclosed container.",
"title": "Reproduction and development"
},
{
"paragraph_id": 44,
"text": "As with all holometabolans, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed as obtect pupae).",
"title": "Reproduction and development"
},
{
"paragraph_id": 45,
"text": "Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult Eburia quadrigeminata (Cerambycidae), while Buprestis aurulenta and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items.",
"title": "Reproduction and development"
},
{
"paragraph_id": 46,
"text": "The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. Some beetle species (many Cetoniinae; some Scarabaeinae, Curculionidae and Buprestidae) fly with the elytra closed, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species Coccinella septempunctata and Harmonia axyridis using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m.",
"title": "Behaviour"
},
{
"paragraph_id": 47,
"text": "Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks.",
"title": "Behaviour"
},
{
"paragraph_id": 48,
"text": "Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive.",
"title": "Behaviour"
},
{
"paragraph_id": 49,
"text": "Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles are able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked.",
"title": "Behaviour"
},
{
"paragraph_id": 50,
"text": "Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle Bledius spectabilis lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle Dicheirotrichus gustavi and from the parasitoidal wasp Barycnemis blediator, which kills some 15% of the larvae.",
"title": "Behaviour"
},
{
"paragraph_id": 51,
"text": "Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury small animal carcass to serve as a food resource for their young and build a brood chamber around it. The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them.",
"title": "Behaviour"
},
{
"paragraph_id": 52,
"text": "Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring.",
"title": "Behaviour"
},
{
"paragraph_id": 53,
"text": "Most species of beetles do not display parental care behaviors after the eggs have been laid.",
"title": "Behaviour"
},
{
"paragraph_id": 54,
"text": "Subsociality, where females guard their offspring, is well-documented in two families of Chrysomelidae, Cassidinae and Chrysomelinae.",
"title": "Behaviour"
},
{
"paragraph_id": 55,
"text": "Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil Austroplatypus incompertus. This Australian species lives in horizontal networks of tunnels, in the heartwood of Eucalyptus trees. It is one of more than 300 species of wood-boring Ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults.",
"title": "Behaviour"
},
{
"paragraph_id": 56,
"text": "Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi - some beetles have yeasts living their guts, including some yeasts previously undiscovered anywhere else.",
"title": "Behaviour"
},
{
"paragraph_id": 57,
"text": "Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark, some feed on wood while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores.",
"title": "Behaviour"
},
{
"paragraph_id": 58,
"text": "Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour.",
"title": "Ecology"
},
{
"paragraph_id": 59,
"text": "Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various colored scales or hairs cause beetles such as the avocado weevil Heilipus apiatus to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate.",
"title": "Ecology"
},
{
"paragraph_id": 60,
"text": "Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, where bright or contrasting coloration warn off predators; many beetles and other insects mimic these chemically protected species.",
"title": "Ecology"
},
{
"paragraph_id": 61,
"text": "Chemical defense is important in some species, usually being advertised by bright aposematic colors. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defenses may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses.",
"title": "Ecology"
},
{
"paragraph_id": 62,
"text": "Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia) employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland is made of two containing chambers, one for hydroquinones and hydrogen peroxide, the other holding hydrogen peroxide and catalase enzymes. These chemicals mix and result in an explosive ejection, reaching a temperature of around 100 °C (212 °F), with the breakdown of hydroquinone to hydrogen, oxygen, and quinone. The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators.",
"title": "Ecology"
},
{
"paragraph_id": 63,
"text": "Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and \"play dead\". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation.",
"title": "Ecology"
},
{
"paragraph_id": 64,
"text": "A few species of beetles are ectoparasitic on mammals. One such species, Platypsyllus castoris, parasitises beavers (Castor spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. Others are kleptoparasites of other invertebrates, such as the small hive beetle (Aethina tumida) that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding off of, and eventually killing their hosts.",
"title": "Ecology"
},
{
"paragraph_id": 65,
"text": "Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important in some parts of the world such as semiarid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa.",
"title": "Ecology"
},
{
"paragraph_id": 66,
"text": "Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the weevils and the fungus both benefit. The beetles cannot eat the wood due to toxins, and uses its relationship with fungi to help overcome the defenses of its host tree in order to provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved.",
"title": "Ecology"
},
{
"paragraph_id": 67,
"text": "About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism to tide unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months) adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle Pterostichus brevicornis showed that the body fat levels of adults were highest in autumn with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration.",
"title": "Ecology"
},
{
"paragraph_id": 68,
"text": "All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle Pityogenes chalcographus can survive −39°C whilst overwintering beneath tree bark; the Alaskan beetle Cucujus clavipes puniceus is able to withstand −58°C; its larvae may survive −100°C. At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to survival to beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by Cucujus clavipes can be survived through their deliberate dehydration in conjunction with the antifreeze proteins. This concentrates the antifreezes several fold. The hemolymph of the mealworm beetle Tenebrio molitor contains several antifreeze proteins. The Alaskan beetle Upis ceramboides can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol, threitol.",
"title": "Ecology"
},
{
"paragraph_id": 69,
"text": "Conversely, desert dwelling beetles are adapted to tolerate high temperatures. For example, the Tenebrionid beetle Onymacris rugatipennis can withstand 50°C. Tiger beetles in hot, sandy areas are often whitish (for example, Habroscelimorpha dorsalis), to reflect more heat than a darker color would. These beetles also exhibits behavioural adaptions to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed.",
"title": "Ecology"
},
{
"paragraph_id": 70,
"text": "The fogstand beetle of the Namib Desert, Stenocara gracilipes, is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards their mouthparts. Similar adaptations are found in several other Namib desert beetles such as Onymacris unguicularis.",
"title": "Ecology"
},
{
"paragraph_id": 71,
"text": "Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from flooding, but larvae and pupa often cannot. Adults of Cicindela togata are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may have been sustained by switching to anaerobic metabolic pathways or by reducing metabolic rate. Anoxia tolerance in the adult carabid beetle Pelophilia borealis was tested in laboratory conditions and it was found that they could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C.",
"title": "Ecology"
},
{
"paragraph_id": 72,
"text": "Many beetle species undertake annual mass movements which are termed as migrations. These include the pollen beetle Meligethes aeneus and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of Mountain Pine Beetle (Dendroctonus ponderosae) in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare.",
"title": "Ecology"
},
{
"paragraph_id": 73,
"text": "Several species of dung beetle, especially the sacred scarab, Scarabaeus sacer, were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century Moralia. The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell.",
"title": "Relationship to humans"
},
{
"paragraph_id": 74,
"text": "Pliny the Elder discusses beetles in his Natural History, describing the stag beetle: \"Some insects, for the preservation of their wings, are covered with an erust (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite.\" The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: \"He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle\". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant.",
"title": "Relationship to humans"
},
{
"paragraph_id": 75,
"text": "About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US, traveling 40 to 160 miles (60–260 km) per year. It remains the most destructive cotton pest in North America. Mississippi State University has estimated, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year.",
"title": "Relationship to humans"
},
{
"paragraph_id": 76,
"text": "The bark beetle, elm leaf beetle and the Asian longhorned beetle (Anoplophora glabripennis) are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America.",
"title": "Relationship to humans"
},
{
"paragraph_id": 77,
"text": "Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, Leptinotarsa decemlineata, is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles.",
"title": "Relationship to humans"
},
{
"paragraph_id": 78,
"text": "The death watch beetle, Xestobium rufovillosum (Ptinidae), is a serious pest of older wooden buildings in Europe. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction.",
"title": "Relationship to humans"
},
{
"paragraph_id": 79,
"text": "Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada.",
"title": "Relationship to humans"
},
{
"paragraph_id": 80,
"text": "Beetles can be beneficial to human economics by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus Zygogramma is native to North America but has been used to control Parthenium hysterophorus in India and Ambrosia artemisiifolia in Russia.",
"title": "Relationship to humans"
},
{
"paragraph_id": 81,
"text": "Dung beetles (Scarabidae) have been successfully used to reduce the populations of pestilent flies, such as Musca vetustissima and Haematobia exigua which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985), introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of Musca vetustissima, following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces.",
"title": "Relationship to humans"
},
{
"paragraph_id": 82,
"text": "The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue.",
"title": "Relationship to humans"
},
{
"paragraph_id": 83,
"text": "Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments.",
"title": "Relationship to humans"
},
{
"paragraph_id": 84,
"text": "Due to their habitat specificity, many species of beetles have been suggested as suitable as indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. According to the habitats, many other groups such as the rove beetles in human-modified habitats, dung beetles in savannas and saproxylic beetles in forests have been suggested as potential indicator species.",
"title": "Relationship to humans"
},
{
"paragraph_id": 85,
"text": "Many beetles have durable elytra that has been used as material in art, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the incredibly hard elytra and sedentary habits of the genus.",
"title": "Relationship to humans"
},
{
"paragraph_id": 86,
"text": "Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male Xylotrupes rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve gambling both money and property. In South Korea the Dytiscidae species Cybister tripunctatus is used in a roulette-like game.",
"title": "Relationship to humans"
},
{
"paragraph_id": 87,
"text": "Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the \"hugu\" weevil Rhynchophorus ferrugineus as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle.",
"title": "Relationship to humans"
},
{
"paragraph_id": 88,
"text": "Some species of beetle are kept as pets, for example diving beetles (Dytiscidae) may be kept in a domestic fresh water tank.",
"title": "Relationship to humans"
},
{
"paragraph_id": 89,
"text": "In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles.",
"title": "Relationship to humans"
},
{
"paragraph_id": 90,
"text": "Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book The Malay Archipelago, including 2,000 species new to science.",
"title": "Relationship to humans"
},
{
"paragraph_id": 91,
"text": "Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle (Stenocara gracilipes) has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall.",
"title": "Relationship to humans"
},
{
"paragraph_id": 92,
"text": "Living beetles have been used as cyborgs. A Defense Advanced Research Projects Agency funded project implanted electrodes into Mecynorhina torquata beetles, allowing them to be remotely controlled via a radio receiver held on its back, as proof-of-concept for surveillance work. Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of Mecynorhina torquata as well as graded turning and backward walking of Zophobas morio.",
"title": "Relationship to humans"
},
{
"paragraph_id": 93,
"text": "Research published in 2020 sought to create a robotic camera backpack for beetles. Miniature cameras weighing 248 mg were attached to live beetles of the Tenebrionid genera Asbolus and Eleodes. The cameras filmed over a 60° range for up to 6 hours.",
"title": "Relationship to humans"
},
{
"paragraph_id": 94,
"text": "Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to impact on beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened while others are already feared extinct. Island species tend to be more susceptible as in the case of Helictopleurus undatus of Madagascar which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, Lucanus cervus, and tiger beetles (Cicindelidae). In Japan the Genji firefly, Luciola cruciata, is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, Cicinis bruchi.",
"title": "Relationship to humans"
}
] | Beetles are insects that form the order Coleoptera, in the superorder Holometabola. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops. Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are holometabolans, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage. Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pests include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, and the mountain pine beetle. Most beetles, however, do not cause economic damage and many, such as the lady beetles and dung beetles are beneficial by helping to control insect pests. | 2001-11-08T14:06:00Z | 2023-12-28T16:38:10Z | [
"Template:Other uses",
"Template:Cite web",
"Template:Refend",
"Template:Main",
"Template:Redirect",
"Template:Use mdy dates",
"Template:IPAc-en",
"Template:Convert",
"Template:As of",
"Template:Efn",
"Template:Citation needed",
"Template:See also",
"Template:Cite book",
"Template:In lang",
"Template:Ma",
"Template:OEtymD",
"Template:Citation",
"Template:Refbegin",
"Template:Orders of Insects",
"Template:Short description",
"Template:Automatic taxobox",
"Template:Clade",
"Template:Hiero",
"Template:Cite news",
"Template:Wikispecies",
"Template:Authority control",
"Template:As written",
"Template:Insects in culture",
"Template:Coleoptera",
"Template:Redirect-distinguish",
"Template:Good article",
"Template:Gaps",
"Template:Notelist",
"Template:Cite journal",
"Template:Cite magazine",
"Template:Cbignore",
"Template:Further",
"Template:Wikibooks",
"Template:Reflist",
"Template:Cite AV media",
"Template:Taxonbar"
] | https://en.wikipedia.org/wiki/Beetle |
7,045 | Concorde | Concorde (/ˈkɒŋkɔːrd/) is a retired Franco-British supersonic airliner jointly developed and manufactured by Sud Aviation (later Aérospatiale) and the British Aircraft Corporation (BAC). Studies started in 1954, and France and the UK signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million (£1.39 billion in 2021). Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969. A market for 350 aircraft was predicted, and the manufacturers received up to 100 option orders from many major airlines. On 9 October 1975, it received its French Certificate of Airworthiness, and its UK certification from the CAA followed on 5 December.
Concorde is a tailless aircraft design with a narrow fuselage permitting 4-abreast seating for 92 to 128 passengers, an ogival delta wing and a droop nose for landing visibility. It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed. Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner could maintain a supercruise up to Mach 2.04 (2,170 km/h; 1,350 mph) at an altitude of 60,000 ft (18.3 km).
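As a rough arithmetic check (not taken from the source), the quoted cruise speed follows from the Mach number: in the stratosphere the International Standard Atmosphere temperature is about 216.65 K, giving a speed of sound near 295 m/s, so Mach 2.04 corresponds to roughly 2,170 km/h (1,350 mph). The short Python sketch below illustrates the conversion; the atmosphere constants are assumptions, not values stated in the text.

    import math

    # Illustrative sketch: convert a Mach number at stratospheric cruise altitude
    # into km/h and mph. Assumed International Standard Atmosphere values:
    # T = 216.65 K above ~11 km, gamma = 1.4, R = 287.05 J/(kg*K).
    GAMMA = 1.4               # ratio of specific heats for air
    R_AIR = 287.05            # specific gas constant for air, J/(kg*K)
    T_STRATOSPHERE = 216.65   # K, assumed temperature at Concorde's cruise altitude

    def mach_to_speed(mach: float, temperature_k: float) -> float:
        """Return true airspeed in m/s for a given Mach number and air temperature."""
        speed_of_sound = math.sqrt(GAMMA * R_AIR * temperature_k)
        return mach * speed_of_sound

    v = mach_to_speed(2.04, T_STRATOSPHERE)
    print(f"{v:.0f} m/s = {v * 3.6:.0f} km/h = {v * 2.23694:.0f} mph")
    # Prints about 602 m/s = 2167 km/h = 1346 mph, matching the rounded
    # 2,170 km/h and 1,350 mph figures quoted above.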
Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976, (£9–13.2 billion in 2021). Concorde entered service on 21 January of that year with Air France from Paris-Roissy and British Airways from London Heathrow. Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977. Air France and British Airways remained the sole customers with seven airframes each, for a total production of twenty. Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only.
Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built.
On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The Concorde aircraft were retired in 2003, 27 years after commercial operations had begun. Most of the aircraft remain on display in Europe and America.
The origins of the Concorde project date to the early 1950s, when Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study the supersonic transport (SST) concept. The group met for the first time in February 1954 and delivered their first report in April 1955. At the time it was known that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of short-span, thin trapezoidal wings such as those seen on the control surfaces of many missiles, or in aircraft such as the Lockheed F-104 Starfighter interceptor or the planned Avro 730 strategic bomber that the team studied. The team outlined a baseline configuration that resembled an enlarged Avro 730.
This same short span produced very little lift at low speed, which resulted in extremely long take-off runs and high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways and, to provide the fuel needed, "some horribly large aeroplanes" resulted. Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics.
Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta" concept. The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of the physical nature of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift to be greatly increased. This effect had been noticed earlier, notably by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that this was no mere curiosity, and the effect could be used deliberately to improve low speed performance.
Küchemann's and Weber's papers changed the entire nature of supersonic design almost overnight. Although the delta had already been used on aircraft prior to this point, these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance inherent to the short span, while also offering reasonable take-off and landing speeds using vortex generation. The only downside to such a design is that the aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low speed handling qualities of such a design. It would also need to have long landing gear to produce the required angle of attack while still on the runway.
Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the true birth of the Concorde project.
On 1 October 1956 the Ministry of Supply asked Morgan to form a new study group, the Supersonic Transport Aircraft Committee (STAC) (sometimes referred to as the Supersonic Transport Advisory Committee), with the explicit goal of developing a practical SST design and finding industry partners to build it. At the first meeting, on 5 November 1956, the decision was made to fund the development of a test bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft would ultimately demonstrate safe control at speeds as low as 69 mph (111 km/h), about 1/3 that of the F-104 Starfighter.
STAC stated that an SST would have economic performance similar to existing subsonic types. A significant problem is that lift is not generated the same way at supersonic and subsonic speeds, with the lift-to-drag ratio for supersonic designs being about half that of subsonic designs. This means the aircraft would have to use more power than a subsonic design of the same size. But although they would burn more fuel in cruise, they would be able to fly more sorties in a given period of time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs, as it did at the time.
STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at perhaps Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. The smaller 100-passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan strongly suggested that the US was already involved in a similar project, and that if the UK failed to respond it would be locked out of an airliner market that he believed would be dominated by SST aircraft.
In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta concept, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed shorter-range category. Even at this early time, both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft, and Sud Aviation.
Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes; the classic straight-edge delta, the "gothic delta" that was rounded outward to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had its own advantages and disadvantages in terms of aerodynamics. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs.
Generally one wants to have the wing's centre of pressure (CP, or "lift point") close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, it is common for the CG to move fore or aft. With a normal wing design this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore.
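To make the balance argument concrete, the following minimal sketch (illustrative only, with invented figures rather than anything from the source) shows how the pitching moment that the controls must trim out grows with the separation between the centre of pressure and the centre of gravity; on a wing running most of the fuselage length, there is no easy way to shrink that separation by shifting the wing.

    # Minimal sketch with hypothetical numbers: the moment the pitch controls must
    # cancel is roughly the lift multiplied by the CP-CG offset, so even a modest
    # shift of the lift point demands large trim forces.
    def trim_moment(lift_n: float, cp_cg_offset_m: float) -> float:
        """Pitching moment (N*m) produced when the lift acts offset_m from the CG."""
        return lift_n * cp_cg_offset_m

    lift = 1.8e6  # newtons, roughly a 185-tonne aircraft in level flight (assumed)
    for offset in (0.1, 0.5, 1.0):  # metres of CP-CG separation (assumed)
        print(f"offset {offset:.1f} m -> trim moment {trim_moment(lift, offset):.2e} N*m")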
While the wing planform was evolving, so was the basic SST concept. Bristol's original Type 198 was a small design with an almost pure slender delta wing, but evolved into the larger Type 223.
To test the new wing, NASA privately assisted the team by fitting a Douglas F5D Skylancer with temporary wing modifications that mimicked the selected wing shape. In 1965 NASA successfully flight-tested the modified aircraft and found that the new wing noticeably reduced landing speeds compared with the standard delta wing. NASA Ames test center also ran simulations that showed the aircraft would suffer a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training.
By this time similar political and economic concerns in France had led to their own SST plans. In the late 1950s, the government requested designs from both the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet powered design flying at Mach 3, and the other two were jet-powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board.
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a similar aircraft after considering the SST problem and coming to the very same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to France to win political favour. Sud made minor changes to the paper and presented it as their own work.
Unsurprisingly, the two teams found much to agree on. France had no modern large jet engines and had already concluded they would buy a British design anyway (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of high-heat metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed, the friction with the air warms the metal so much that aluminium begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Finally, everyone involved agreed that Küchemann's ogee-shaped wing was the right one.
The only disagreements were over the size and range. The British team was still focused on a 150-passenger design serving transatlantic routes, while France was deliberately avoiding these. However, this proved not to be the barrier it might seem; common components could be used in both designs, with the shorter range version using a clipped fuselage and four engines, and the longer one a stretched fuselage and six engines, leaving only the wing to be extensively re-designed. The teams continued to meet through 1961, and by this time it was clear that the two aircraft would be considerably more similar than originally envisaged, in spite of their different ranges and seating arrangements. A single design emerged that differed mainly in fuel load. More powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines.
While the development teams met, the French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft soon revealed to the cabinet that France was much more serious about a partnership than any of the US companies. The various US companies had proved uninterested in such a venture, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner.
When the STAC plans were presented to the UK cabinet, a negative reaction resulted. The economic case was considered highly questionable, especially as it rested on development cost estimates, now put at £150 million (equivalent to £3.09 billion or US$3.94 billion in 2019), of a kind the industry had repeatedly overrun. The Treasury in particular presented a very negative view, suggesting that there was no way the project would have any positive financial returns for the government, especially in light of the view that "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider" the cost estimate as likely "to turn out much too low."
This concern led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on the topic between July and September 1962. The committee ultimately rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. Their report in October stated that it was unlikely there would be any direct positive economic outcome, but that the project should still be considered for the simple reason that everyone else was going supersonic, and they were concerned they would be locked out of future markets. Conversely, it appeared the project would not be likely to significantly affect other, more important, research efforts.
After considerable argument, the decision to proceed ultimately fell to an unlikely political expediency. At the time, the UK was pressing for admission to the European Economic Community, and this became the main rationale for moving ahead with the aircraft. The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies and included a clause, originally asked for by the UK government, imposing heavy penalties for cancellation. This treaty was signed on 29 November 1962. Charles de Gaulle would soon veto the UK's entry into the European Community in a speech on 25 January 1963.
It was at Charles de Gaulle's January 1963 press conference that the aircraft was first called 'Concorde'. The name was suggested by the eighteen-year-old son of F.G. Clark, the publicity manager at BAC's Filton plant. Reflecting the treaty between the British and French governments that led to Concorde's construction, the name Concorde is from the French word concorde (IPA: [kɔ̃kɔʁd]), which has an English equivalent, concord. Both words mean agreement, harmony, or union. The name was officially changed to Concord by Harold Macmillan in response to a perceived slight by Charles de Gaulle. At the French roll-out in Toulouse in late 1967, the British Government Minister of Technology, Tony Benn, announced that he would change the spelling back to Concorde. This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe, and Entente (Cordiale)". In his memoirs, he recounted a tale of a letter from an irate Scotsman claiming, "you talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "it was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!"
Concorde also acquired an unusual nomenclature for an aircraft. In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde".
Described by Flight International as an "aviation icon" and "one of aerospace's most ambitious but commercially flawed projects", Concorde failed to meet its original sales targets, despite initial interest from several airlines.
At first, the new consortium intended to produce one long-range and one short-range version. However, prospective customers showed no interest in the short-range version, which was dropped.
A two-page advertisement for Concorde ran in the 29 May 1967 issue of Aviation Week & Space Technology which predicted a market for 350 aircraft by 1980 and boasted of Concorde's head start over the United States' SST project.
Concorde had considerable difficulties that led to its dismal sales performance. Costs had spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £152.02 million in 2021). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events had also dampened Concorde sales prospects; the 1973–74 stock market crash and the 1973 oil crisis had made many airlines cautious about aircraft with high fuel consumption rates, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. While carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. An emerging trend in the industry in favour of cheaper airline tickets had also caused airlines such as Qantas to question Concorde's market suitability.
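As a rough illustration of the per-passenger fuel gap these figures imply, the following sketch divides an assumed 3,500-mile transatlantic sector (an illustrative round number, not a figure from this article) by the passenger miles per gallon quoted above:

distance_miles = 3500  # assumed round figure for a transatlantic sector
pmpg = {"Concorde": 15.8, "Boeing 707": 33.3, "Boeing 747": 46.4, "DC-10": 53.6}
for aircraft, efficiency in pmpg.items():
    # gallons of fuel burned per passenger over the sector
    print(f"{aircraft}: {distance_miles / efficiency:.0f} gallons per passenger")
# Concorde works out to roughly 220 gallons per passenger, about three times the 747's figure.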
The consortium received orders, i.e., non-binding options, for more than 100 of the long-range versions from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six Concordes each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight, the options list contained 74 options from 16 airlines.
The design work was supported by a preceding research programme studying the flight characteristics of low-aspect-ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for flight tests of the high-speed flight envelope; the Handley Page HP.115 also provided valuable information on low-speed performance.
Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. Concorde 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public for the first time on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed suit on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at the new Dallas/Fort Worth Regional Airport to mark that airport's opening.
While Concorde had initially held a great deal of customer interest, the project was hit by a large number of order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues presented by a supersonic aircraft—the sonic boom, take-off noise and pollution—had produced a shift in public opinion of SSTs. By 1976 the remaining buyers were from four countries: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits made.
The United States government cut federal funding for the Boeing 2707, its rival supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out supersonic Concorde flights over their territory because of noise concerns, although some of these restrictions were later relaxed. Professor Douglas Ross characterised the restrictions placed upon Concorde operations by President Jimmy Carter's administration as an act of protectionism towards American aircraft manufacturers.
The original programme cost estimate was £70 million before 1962 (£1.39 billion in 2019). The programme experienced huge cost overruns and delays, eventually costing between £1.5 and £2.1 billion in 1976 (£9.44–13.2 billion in 2019). This extreme cost was the main reason the production run was much smaller than expected. The per-unit cost was impossible to recoup, so the French and British governments absorbed the development costs.
Concorde is an ogival delta winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It is one of the few commercial aircraft to employ a tailless design (the Tupolev Tu-144 being another). Concorde was the first airliner to have a (in this case, analogue) fly-by-wire flight-control system; the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy.
Concorde pioneered a number of technologies, both for high speed and optimisation of flight and for weight-saving and enhanced performance.
A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Boundary layer management in the podded installation was put forward as simpler, requiring only an inlet cone, but Dr. Seddon of the RAE saw "a future in a more sophisticated integration of shapes" in a buried installation. Another concern was the case of two or more engines situated behind a single intake: an intake failure could lead to a double or triple engine failure. The advantage of the ducted fan over the turbojet was reduced airport noise, but it carried considerable economic penalties because its larger cross-section produced excessive drag. At that time it was considered that the noise from a turbojet optimised for supersonic cruise could be reduced to an acceptable level using noise suppressors of the kind used on subsonic jets.
The powerplant configuration selected for Concorde, and its development to a certificated design, can be seen in light of the above symposium topics (which highlighted airfield noise, boundary layer management and interactions between adjacent engines) and of the requirement that the powerplant, at Mach 2, tolerate combinations of pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing, with design changes and revisions to the intake and engine control laws, addressed most of these issues, except for airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6, which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6".
Rolls-Royce had a design proposal, the RB.169, for the aircraft at the time of Concorde's initial design but "to develop a brand-new engine for Concorde would have been prohibitively expensive" so an existing engine, already flying in the supersonic BAC TSR-2 strike bomber prototype, was chosen. It was the BSEL Olympus Mk 320 turbojet, a development of the Bristol engine first used for the subsonic Avro Vulcan bomber.
Great confidence was placed in being able to reduce the noise of a turbojet and massive strides by SNECMA in silencer design were reported during the programme. However, by 1974 the spade silencers which projected into the exhaust were reported to be ineffective but "entry-into-service aircraft are likely to meet their noise guarantees". The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but it was not developed.
Situated behind the leading edge of the wing, the engine intake had a wing boundary layer ahead of it. Two-thirds of this boundary layer was diverted, and the remaining third that entered the intake did not adversely affect intake efficiency, except during pushovers, when the boundary layer thickened ahead of the intake and caused surging. Extensive wind tunnel testing helped define leading-edge modifications ahead of the intakes which solved the problem.
Each engine had its own intake and the engine nacelles were paired with a splitter plate between them to minimise adverse behaviour of one powerplant influencing the other. Only above Mach 1.6 (1,960 km/h; 1,220 mph) was an engine surge likely to affect the adjacent engine.
Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag and therefore, their unsuitability for supersonic speeds. Olympus turbojet technology was available to be developed to meet the design requirements of the aircraft, although turbofans would be studied for any future SST.
The aircraft used reheat (afterburners) only at take-off and to pass through the upper transonic regime to supersonic speeds, between Mach 0.95 and Mach 1.7; reheat was switched off at all other times. Because jet engines are highly inefficient at low speeds, Concorde burned two tonnes (4,400 lb) of fuel (almost 2% of the maximum fuel load) taxiing to the runway. The fuel used was Jet A-1. Because of the high thrust produced even with the engines at idle, only the two outer engines were run after landing, for easier taxiing and less brake-pad wear; at the low weights typical after landing, the aircraft would not remain stationary with all four engines idling, and the brakes would have had to be applied continuously to prevent it from rolling.
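A minimal sketch of the taxi-fuel arithmetic, assuming a maximum fuel load of roughly 95 tonnes (an assumed figure, not stated in this article):

taxi_fuel_tonnes = 2.0   # fuel burned taxiing to the runway, as quoted above
max_fuel_tonnes = 95.0   # assumption about the maximum fuel load
print(f"{100 * taxi_fuel_tonnes / max_fuel_tonnes:.1f}% of maximum fuel")  # about 2%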
The air intake design for Concorde's engines was especially critical. The intakes had to slow down supersonic inlet air to subsonic speeds with high-pressure recovery to ensure efficient operation at cruising speed while providing low distortion levels (to prevent engine surge) and maintaining high efficiency for all likely ambient temperatures to be met in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake of air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle.
As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise.
Concorde's Air Intake Control Units (AICUs) made use of a digital processor to provide the necessary accuracy for intake control. It was the world's first use of a digital processor given full-authority control of an essential system in a passenger aircraft. It was developed by the Electronics and Space Systems (ESS) division of the British Aircraft Corporation after the analogue AICUs, developed by Ultra Electronics and fitted to the prototype aircraft, proved insufficiently accurate for the task.
Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without the predicted difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double-engine failure.
Concorde's thrust-by-wire engine control system was developed by Ultra Electronics.
Air compression on the outer surfaces caused the cabin to heat up during flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Besides engines, the hottest part of the structure of any supersonic aircraft is the nose, due to aerodynamic heating. The engineers used Hiduminium R.R. 58, an aluminium alloy, throughout the aircraft because of its familiarity, cost and ease of construction. The highest temperature that aluminium could sustain over the life of the aircraft was 127 °C (261 °F), which limited the top speed to Mach 2.02. Concorde went through two cycles of heating and cooling during a flight, first cooling down as it gained altitude, then heating up after going supersonic. The reverse happened when descending and slowing down. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing, and then cooled it, and periodically samples of metal were taken for testing. The Concorde airframe was designed for a life of 45,000 flying hours.
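The link between the 127 °C limit of the alloy and the Mach 2.02 ceiling can be sketched with the ideal-gas stagnation-temperature relation, assuming a standard stratospheric ambient temperature of 216.65 K and full temperature recovery; real skin temperatures varied with position on the airframe, so this is an approximation only:

ambient_K = 216.65   # assumed ISA stratospheric temperature
gamma = 1.4          # ratio of specific heats for air
mach = 2.02
# stagnation temperature: T0 = T * (1 + (gamma - 1) / 2 * M^2)
stagnation_K = ambient_K * (1 + (gamma - 1) / 2 * mach**2)
print(f"{stagnation_K - 273.15:.0f} C")  # about 120 C, approaching the 127 C limit quoted above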
Owing to air compression in front of the aircraft as it travelled at supersonic speed, the fuselage heated up and expanded by as much as 300 mm (12 in). The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On the final supersonic flights of some aircraft, the flight engineers placed their caps in this expanded gap, so that the caps were wedged in place when the airframe cooled and shrank again. To keep the cabin cool, Concorde used its fuel as a heat sink for the heat from the air conditioning; the same method also cooled the hydraulics. During supersonic flight the surfaces forward of the cockpit became heated, and a visor was used to deflect much of this heat from reaching the cockpit directly.
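An order-of-magnitude check on the quoted stretch can be made from the linear thermal-expansion relation ΔL = αLΔT; the 62 m length and 200 K temperature swing used below are assumed round figures, not values given in the article:

alpha = 23e-6      # per kelvin, a typical coefficient for aluminium alloys
length_m = 62.0    # assumed overall airframe length
delta_T_K = 200.0  # assumed swing from cold-soaked structure to heated skin
print(f"{alpha * length_m * delta_T_K * 1000:.0f} mm")  # about 285 mm, the same order as the 300 mm above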
Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects from supersonic flight at Mach 2. The white finish reduced the skin temperature by 6 to 11 °C (11 to 20 °F). In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at Mach 2 (2,120 km/h; 1,320 mph) for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations.
Because of its high speeds, large forces were applied to the aircraft during banks and turns, causing twisting and distortion of the structure. There were also concerns over maintaining precise control at supersonic speeds. Both of these issues were addressed by actively varying the ratio of inboard to outboard elevon deflection with speed, including at supersonic speeds. Only the innermost elevons, which are attached to the stiffest area of the wings, were active at high speed. Additionally, the narrow fuselage meant that the aircraft flexed; this was visible from the rear passengers' viewpoints.
When any aircraft passes the critical mach of that particular airframe, the centre of pressure shifts rearwards. This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The engineers designed the wings in a specific manner to reduce this shift, but there was still a shift of about 2 metres (6 ft 7 in). This could have been countered by the use of trim controls, but at such high speeds, this would have dramatically increased drag. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control.
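The effect of pumping fuel can be seen from a simple moment balance: moving a mass m through a distance d shifts the centre of gravity of an aircraft of total mass M by m·d/M. The figures below are illustrative assumptions, not published Concorde values:

total_mass_t = 150.0   # assumed aircraft mass during the acceleration phase
fuel_moved_t = 15.0    # assumed mass of fuel pumped rearwards
transfer_arm_m = 20.0  # assumed distance between forward and rear trim tanks
print(f"CG shift: {fuel_moved_t * transfer_arm_m / total_mass_t:.1f} m")  # about 2 m, matching the shift described above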
To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of engines which were highly efficient at supersonic speeds, a slender fuselage with high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. This also required carrying only a modest payload and a high fuel capacity, and the aircraft was trimmed to avoid unnecessary drag.
Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed, with slightly larger fuel capacity and slightly larger wings with leading-edge slats to improve aerodynamic performance at all speeds, with the objective of extending the range to reach markets in new regions. It featured more powerful engines with sound deadening and without the fuel-hungry and noisy afterburner. It was thought reasonably possible to create an engine with up to a 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593. This would have given 500 mi (805 km) of additional range and a greater payload, making new commercial routes possible. The project was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s.
Concorde's high cruising altitude meant people on board received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travel would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, however, the overall equivalent dose would normally be less than that of a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of increase or decrease of radiation. If the radiation level became too high, Concorde would descend below 47,000 feet (14,000 m).
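The trade-off described here is simple dose arithmetic: equivalent dose is dose rate multiplied by exposure time, so roughly doubling the rate while roughly halving the flight time leaves the total about the same. The microsievert-per-hour rates below are illustrative assumptions only:

subsonic_dose_uSv = 5.0 * 7.0    # assumed ~5 uSv/h over a ~7 h subsonic crossing
concorde_dose_uSv = 10.0 * 3.5   # assumed ~10 uSv/h over a ~3.5 h supersonic crossing
print(subsonic_dose_uSv, concorde_dose_uSv)  # both about 35 uSv, so comparable total equivalent dose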
Airliner cabins were usually maintained at a pressure equivalent to 6,000–8,000 feet (1,800–2,400 m) elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, 6,000 feet (1,800 m). Concorde's maximum cruising altitude was 60,000 feet (18,000 m); subsonic airliners typically cruise below 44,000 feet (13,000 m).
A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above 50,000 feet (15,000 m), a sudden cabin depressurisation would leave a "time of useful consciousness" of only 10–15 seconds, even for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective, and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was therefore equipped with smaller windows to reduce the rate of pressure loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid-descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft and, noting Concorde's higher operating altitude, concluded that the best response to a pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks.
While subsonic commercial jets took eight hours to fly from Paris to New York (seven hours from New York to Paris), the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruising altitude of 18,300 metres (60,000 ft) and an average cruise speed of Mach 2.02 (2,150 km/h; 1,330 mph), more than twice the speed of conventional aircraft.
With no other civil traffic operating at its cruising altitude of about 56,000 ft (17,000 m), Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a 15,000-foot (4,570 m) block, allowing for a slow climb from 45,000 to 60,000 ft (14,000 to 18,000 m) during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient cruise-climb flight profile following take-off.
The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but this allowed the formation of large low-pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was 170 miles per hour (274 km/h). Because of this high angle, during a landing approach Concorde was on the "backside" of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload.
The only thing that tells you that you're moving is that occasionally when you're flying over the subsonic aeroplanes you can see all these 747s 20,000 feet below you almost appearing to go backwards, I mean you are going 800 miles an hour or thereabouts faster than they are. The aeroplane was an absolute delight to fly, it handled beautifully. And remember we are talking about an aeroplane that was being designed in the late 1950s – mid-1960s. I think it's absolutely amazing and here we are, now in the 21st century, and it remains unique.
Because of the way Concorde's delta wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation, the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation (199 knots or 369 kilometres per hour or 229 miles per hour indicated airspeed), this increased the stresses on the main undercarriage in a way that had not been anticipated during development and required a major redesign. Because of the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swung towards each other to be stowed, but because of their great height they also had to contract telescopically before swinging, so as to clear each other when stowed.
The four main wheel tyres on each bogie unit are inflated to 232 psi (1,600 kPa). The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of 191 psi (1,320 kPa), and the wheel assembly carries a spray deflector to prevent standing water from being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of 250 mph (400 km/h). The starboard nose wheel carries a single disc brake to halt wheel rotation during retraction of the undercarriage. The port nose wheel carries speed generators for the anti-skid braking system which prevents brake activation until the nose and main wheels rotate at the same rate.
Additionally, due to the high average take-off speed of 250 miles per hour (400 km/h), Concorde needed upgraded brakes. Like most airliners, Concorde had anti-skid braking, a system which prevents the tyres from losing traction when the brakes are applied, for greater control during roll-out. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner; the use of carbon over equivalent steel brakes provided a weight saving of 1,200 lb (540 kg). Each wheel had multiple discs, cooled by electric fans, and wheel sensors monitored brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around 300–400 °C (570–750 °F). Landing Concorde required a minimum runway length of 6,000 feet (1,800 m), considerably less than the shortest runway on which Concorde ever actually landed carrying commercial passengers, that of Cardiff Airport. Concorde G-AXDN (101), however, made its final landing on 20 August 1977 at Duxford Aerodrome, whose runway was just 6,000 feet (1,800 m) long at the time; it was the final aircraft to land at Duxford before the runway was shortened later that year.
Concorde's drooping nose, developed by Marshall's of Cambridge, resolved a conflict between the needs of flight and of ground operations: the long, pointed nose kept the aircraft streamlined for minimum drag and optimal aerodynamic efficiency in flight, but because of the high angle of attack it obstructed the pilots' view during taxi, take-off, and landing, so the ability to droop was needed. The droop nose was accompanied by a moving visor that retracted into the nose prior to its being lowered; when the nose was raised to horizontal, the visor rose in front of the cockpit windscreen for aerodynamic streamlining.
A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximum visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage from collision with ground vehicles, and then raised fully before engine shutdown to prevent internal condensation from pooling within the radome and seeping down into the aircraft's pitot/ADC system probes.
The US Federal Aviation Administration had objected to the restricted visibility through the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass became available; the design therefore required alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used in the production aircraft and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, which needed to endure temperatures in excess of 100 °C (210 °F) in supersonic flight, were developed by Triplex.
Concorde 001 was modified with rooftop portholes for use on the 1973 Solar Eclipse mission and equipped with observation instruments. It performed the longest observation of a solar eclipse to date, about 74 minutes.
Scheduled flights began on 21 January 1976 on the London–Bahrain and Paris–Rio de Janeiro (via Dakar) routes, with BA flights using the Speedbird Concorde call sign to notify air traffic control of the aircraft's unique abilities and restrictions, but the French using their normal call signs. The Paris-Caracas route (via Azores) began on 10 April. The US Congress had just banned Concorde landings in the US, mainly due to citizen protest over sonic booms, preventing launch on the coveted North Atlantic routes. The US Secretary of Transportation, William Coleman, gave permission for Concorde service to Dulles International Airport, and Air France and British Airways simultaneously began a thrice-weekly service to Dulles on 24 May 1976. Due to low demand, Air France cancelled its Washington service in October 1982, while British Airways cancelled it in November 1994.
When the US ban on JFK Concorde operations was lifted in February 1977, New York banned Concorde locally. The ban came to an end on 17 October 1977 when the Supreme Court of the United States declined to overturn a lower court's ruling rejecting efforts by the Port Authority of New York and New Jersey and a grass-roots campaign led by Carol Berman to continue the ban. In spite of complaints about noise, the noise report noted that Air Force One, at the time a Boeing VC-137, was louder than Concorde at subsonic speeds and during take-off and landing. Scheduled service from Paris and London to New York's John F. Kennedy Airport began on 22 November 1977.
In December 1977, British Airways and Singapore Airlines started sharing a Concorde for flights between London and Singapore International Airport at Paya Lebar via Bahrain. The aircraft, BA's Concorde G-BOAD, was painted in Singapore Airlines livery on the port side and British Airways livery on the starboard side. The service was discontinued after three return flights because of noise complaints from the Malaysian government; it could only be reinstated on a new route bypassing Malaysian airspace in 1979. A dispute with India prevented Concorde from reaching supersonic speeds in Indian airspace, so the route was eventually declared not viable and discontinued in 1980.
During the Mexican oil boom, Air France flew Concorde twice weekly to Mexico City's Benito Juárez International Airport via Washington, DC, or New York City, from September 1978 to November 1982. The worldwide economic crisis during that period resulted in this route's cancellation; the last flights were almost empty. The routing between Washington or New York and Mexico City included a deceleration, from Mach 2.02 to Mach 0.95, to cross Florida subsonically and avoid creating a sonic boom over the state; Concorde then re-accelerated back to high speed while crossing the Gulf of Mexico. On 1 April 1989, on an around-the-world luxury tour charter, British Airways implemented changes to this routing that allowed G-BOAF to maintain Mach 2.02 by passing around Florida to the east and south. Periodically Concorde visited the region on similar chartered flights to Mexico City and Acapulco.
From December 1978 to May 1980, Braniff International Airways leased 11 Concordes, five from Air France and six from British Airways. These were used on subsonic flights between Dallas–Fort Worth and Dulles International Airport, flown by Braniff flight crews; Air France and British Airways crews then took over for the continuing supersonic flights to London and Paris. The aircraft were registered in both the United States and their home countries; the European registration was covered over while each aircraft was operated by Braniff, which retained the full AF/BA liveries. The flights were not profitable and were typically less than 50% booked, forcing Braniff to end its tenure as the only US Concorde operator in May 1980.
In its early years, the British Airways Concorde service had a greater number of "no-shows" (passengers who booked a flight and then failed to appear at the gate for boarding) than any other aircraft in the fleet.
Following the launch of British Airways Concorde services, Britain's other major airline, British Caledonian (BCal), set up a task force headed by Gordon Davidson, BA's former Concorde director, to investigate the possibility of their own Concorde operations. This was seen as particularly viable for the airline's long-haul network as there were two unsold aircraft then available for purchase.
One important reason for BCal's interest in Concorde was that the British Government's 1976 aviation policy review had opened the possibility of BA setting up supersonic services in competition with BCal's established sphere of influence. To counteract this potential threat, BCal considered their own independent Concorde plans, as well as a partnership with BA. BCal were considered most likely to have set up a Concorde service on the Gatwick–Lagos route, a major source of revenue and profits within BCal's scheduled route network; BCal's Concorde task force did assess the viability of a daily supersonic service complementing the existing subsonic widebody service on this route.
BCal entered into a bid to acquire at least one Concorde. However, BCal eventually arranged for two aircraft to be leased from BA and Aérospatiale respectively, to be maintained by either BA or Air France. BCal's envisaged two-Concorde fleet would have required a high level of aircraft usage to be cost-effective; therefore, BCal had decided to operate the second aircraft on a supersonic service between Gatwick and Atlanta, with a stopover at either Gander or Halifax. Consideration was given to services to Houston and various points on its South American network at a later stage. Both supersonic services were to be launched at some point during 1980; however, steeply rising oil prices caused by the 1979 energy crisis led to BCal shelving their supersonic ambitions.
By around 1981 in the UK, the future for Concorde looked bleak. The British government had lost money operating Concorde every year, and moves were afoot to cancel the service entirely. A cost projection came back with greatly reduced metallurgical testing costs, because the test rig for the wings had built up enough data to last for 30 years and could be shut down. Despite this, the government was not keen to continue. In 1983, BA's managing director, Sir John King, convinced the government to sell the aircraft outright to the then state-owned British Airways for £16.5 million (equivalent to £46.32 million or US$59.12 million in 2019) plus the first year's profits. In 2003, Lord Heseltine, who was the minister responsible at the time, revealed to Alan Robb on BBC Radio 5 Live that the aircraft had been sold for "next to nothing". Asked by Robb if it was the worst deal ever negotiated by a government minister, he replied "That is probably right. But if you have your hands tied behind your back and no cards and a very skillful negotiator on the other side of the table... I defy you to do any [better]." British Airways was subsequently privatised in 1987.
Its estimated operating costs were $3,800 per block hour in 1972 (equivalent to $26,585 in 2022), compared to actual 1971 operating costs of $1,835 for a 707 and $3,500 for a 747 (equivalent to $13,260 and $25,291, respectively); for a 3,050 nmi (5,650 km) London–New York sector, a 707 cost $13,750 or 3.04¢ per seat/nmi (in 1971 dollars), a 747 $26,200 or 2.4¢ per seat/nmi and Concorde $14,250 or 4.5¢ per seat/nmi.
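The per-seat figures follow from dividing the sector cost by seats multiplied by distance; the seat counts below are assumptions chosen only to reproduce the quoted values approximately:

sector_nmi = 3050
fleet = [("Boeing 707", 13750, 150), ("Boeing 747", 26200, 360), ("Concorde", 14250, 100)]
for aircraft, sector_cost_usd, seats in fleet:  # seat counts are assumptions
    cents = 100 * sector_cost_usd / (seats * sector_nmi)
    print(f"{aircraft}: {cents:.1f} cents per seat-nmi")
# roughly 3.0, 2.4 and 4.7 cents: in line with the 3.04, 2.4 and 4.5 cents quoted above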
In 1983, Pan Am accused the British Government of subsidising British Airways Concorde air fares, on which a return London–New York was £2,399 (£8,612 in 2021 prices), compared to £1,986 (£7,129) with a subsonic first class return, and London–Washington return was £2,426 (£8,709) instead of £2,258 (£8,106) subsonic.
Concorde's unit cost was then $33.8 million ($180 million in 2022 dollars). British Airways and Air France benefited from a significantly reduced purchase price from the manufacturing consortium via their respective governments.
The speed and premium service were relatively costly: in 1997, the round-trip ticket price from New York to London was $7,995 (equivalent to $14,600 in 2022), more than 30 times the cost of the least expensive scheduled flight on the route. Compared with subsonic first class on the same route, however, return tickets were only about 10–15% more expensive, while the flight time was cut in half.
After on-and-off profitability, Concorde was established in 1982 in its own operating division (Concorde Division) under Capt. Brian Walpole and Capt. Jock Lowe. Their research revealed that passengers thought the fare was higher than it actually was, so the airline raised ticket prices to match these perceptions, and, following this marketing research and repositioning, Concorde ran profitably for British Airways. The ticket price was pitched above subsonic first class, but not by as much as might be expected: in 1996 the Concorde return fare was £4,772, compared to £4,314 for subsonic first class, adding to its corporate appeal. It developed a loyal following and earned over half a billion pounds in profit over the next 20 years, with typically just five aircraft operating and two in various maintenance cycles.
Between March 1984 and January 1991, British Airways flew a thrice-weekly Concorde service between London and Miami, stopping at Dulles International Airport. Until 2003, Air France and British Airways continued to operate the New York services daily. From 1987 to 2003 British Airways flew a Saturday morning Concorde service to Grantley Adams International Airport, Barbados, during the summer and winter holiday season.
Prior to the Air France Paris crash, several UK and French tour operators operated charter flights to European destinations on a regular basis; the charter business was viewed as lucrative by British Airways and Air France.
In 1997, British Airways held a promotional contest to mark the 10th anniversary of the airline's move into the private sector. The promotion was a lottery for 190 tickets to New York, each valued at £5,400 but offered for £10. Contestants had to call a special hotline, competing with up to 20 million other entrants.
On 10 April 2003, Air France and British Airways simultaneously announced they would retire Concorde later that year. They cited low passenger numbers following the 25 July 2000 crash, the slump in air travel following the September 11 attacks, and rising maintenance costs: Airbus, the company that acquired Aérospatiale in 2000, had made a decision in 2003 to no longer supply replacement parts for the aircraft. Although Concorde was technologically advanced when introduced in the 1970s, 30 years later, its analogue cockpit was outdated. There had been little commercial pressure to upgrade Concorde due to a lack of competing aircraft, unlike other airliners of the same era such as the Boeing 747. By its retirement, it was the last aircraft in the British Airways fleet that had a flight engineer; other aircraft, such as the modernised 747-400, had eliminated the role.
On 11 April 2003, Virgin Atlantic founder Sir Richard Branson announced that the company was interested in purchasing British Airways' Concorde fleet "for the same price that they were given them for – one pound". British Airways dismissed the idea, prompting Virgin to increase their offer to £1 million each. Branson claimed that when BA was privatised, a clause in the agreement required them to allow another British airline to operate Concorde if BA ceased to do so, but the Government denied the existence of such a clause. In October 2003, Branson wrote in The Economist that his final offer was "over £5 million" and that he had intended to operate the fleet "for many years to come". The chances for keeping Concorde in service were stifled by Airbus's lack of support for continued maintenance.
It has been suggested that Concorde was not withdrawn for the reasons usually given but that it became apparent during the grounding of Concorde that the airlines could make more profit carrying first-class passengers subsonically. A lack of commitment to Concorde from Director of Engineering Alan MacDonald was cited as having undermined BA's resolve to continue operating Concorde.
Other reasons why the attempted revival of Concorde never happened relate to the fact that the narrow fuselage did not allow for "luxury" features of subsonic air travel such as moving space, reclining seats and overall comfort. In the words of The Guardian's Dave Hall, "Concorde was an outdated notion of prestige that left sheer speed the only luxury of supersonic travel."
The general downturn in the commercial aviation industry after the September 11 attacks in 2001 and the end of maintenance support for Concorde by Airbus, the successor to Aérospatiale, contributed to the aircraft's retirement.
Air France made its final commercial Concorde landing in the United States in New York City from Paris on 30 May 2003. Air France's final Concorde flight took place on 27 June 2003 when F-BVFC retired to Toulouse.
An auction of Concorde parts and memorabilia for Air France was held at Christie's in Paris on 15 November 2003; 1,300 people attended, and several lots exceeded their predicted values. French Concorde F-BVFC was retired to Toulouse and kept functional for a short time after the end of service, in case taxi runs were required in support of the French judicial enquiry into the 2000 crash. The aircraft is now fully retired and no longer functional.
French Concorde F-BTSD has been retired to the Musée de l'Air at Paris–Le Bourget Airport near Paris; unlike the other museum Concordes, a few of its systems are being kept functional. For instance, the famous "droop nose" can still be lowered and raised. This has led to rumours that the aircraft could be prepared for future flights on special occasions.
French Concorde F-BVFB is at the Auto & Technik Museum Sinsheim at Sinsheim, Germany, after its last flight from Paris to Baden-Baden, followed by transport to Sinsheim via barge and road. The museum also has a Tupolev Tu-144 on display – this is the only place where both supersonic airliners can be seen together.
In 1989, Air France signed a letter of agreement to donate a Concorde to the National Air and Space Museum in Washington D.C. upon the aircraft's retirement. On 12 June 2003, Air France honoured that agreement, donating Concorde F-BVFA (serial 205) to the museum upon the completion of its last flight. This aircraft was the first Air France Concorde to open service to Rio de Janeiro, Washington, D.C., and New York and had flown 17,824 hours. It is on display at the Smithsonian's Steven F. Udvar-Hazy Center at Dulles International Airport.
British Airways conducted a North American farewell tour in October 2003. G-BOAG visited Toronto Pearson International Airport on 1 October, after which it flew to New York's John F. Kennedy International Airport. G-BOAD visited Boston's Logan International Airport on 8 October, and G-BOAG visited Dulles International Airport on 14 October.
In a week of farewell flights around the United Kingdom, Concorde visited Birmingham on 20 October, Belfast on 21 October, Manchester on 22 October, Cardiff on 23 October, and Edinburgh on 24 October. Each day the aircraft made a return flight from Heathrow to one of these cities and back, often overflying the city at low altitude. On 22 October, both Concorde flight BA9021C, a special from Manchester, and BA002 from New York landed simultaneously on Heathrow's two runways. On 23 October 2003, the Queen consented to the illumination of Windsor Castle, an honour reserved for state events and visiting dignitaries, as Concorde's last westbound commercial flight departed London.
British Airways retired its Concorde fleet on 24 October 2003. G-BOAG left New York to a fanfare similar to that given for Air France's F-BTSD, while two more made round trips, G-BOAF over the Bay of Biscay, carrying VIP guests including former Concorde pilots, and G-BOAE to Edinburgh. The three aircraft then circled over London, having received special permission to fly at low altitude, before landing in sequence at Heathrow. The captain of the New York to London flight was Mike Bannister. The final flight of a Concorde in the US occurred on 5 November 2003 when G-BOAG flew from New York's JFK Airport to Seattle's Boeing Field to join the Museum of Flight's permanent collection. The plane was piloted by Mike Bannister and Les Broadie, who claimed a flight time of three hours, 55 minutes and 12 seconds, a record between the two cities that was made possible by Canada granting use of a supersonic corridor between Chibougamau, Quebec, and Peace River, Alberta. The museum had been pursuing a Concorde for their collection since 1984. The final flight of a Concorde worldwide took place on 26 November 2003 with a landing at Bristol Filton Airport.
All of BA's Concorde fleet have been grounded, drained of hydraulic fluid and had their airworthiness certificates withdrawn. Jock Lowe, ex-chief Concorde pilot and manager of the fleet, estimated in 2004 that it would cost £10–15 million to make G-BOAF airworthy again. BA retains ownership and has stated that the aircraft will not fly again owing to a lack of support from Airbus. On 1 December 2003, Bonhams held an auction of British Airways Concorde artefacts, including a nose cone, at Kensington Olympia in London; proceeds of around £750,000 were raised, with the majority going to charity. G-BOAD is currently on display at the Intrepid Sea, Air & Space Museum in New York. In 2007, BA announced that the advertising spot at Heathrow where a 40% scale model of Concorde was located would not be retained; the model is now on display at the Brooklands Museum in Surrey, England.
Concorde G-BBDG was used for test flying and trials work. It was retired in 1981 and then only used for spares. It was dismantled and transported by road from Filton to the Brooklands Museum, where it was restored from essentially a shell. It remains open to visitors to the museum, and wears the original Negus & Negus livery worn by the Concorde fleet during their initial years of service with BA.
Concorde G-BOAB, call sign Alpha Bravo, was never given the post-crash modifications and never returned to service with the rest of British Airways' fleet; it has remained at London Heathrow Airport since its final flight, a ferry flight from JFK in 2000. Although the aircraft was effectively retired, G-BOAB was used as a test aircraft for the Project Rocket interiors that were in the process of being added to the rest of BA's fleet. G-BOAB has been towed around Heathrow on various occasions; it currently occupies a space on the airport's apron and is regularly visible to aircraft moving around the airport.
One of the youngest Concordes (F-BTSD) is on display at Le Bourget Air and Space Museum in Paris. In February 2010, it was announced that the museum and a group of volunteer Air France technicians intend to restore F-BTSD so it can taxi under its own power. In May 2010, it was reported that the British Save Concorde Group and French Olympus 593 groups had begun inspecting the engines of a Concorde at the French museum; their intent was to restore the airliner to a condition where it could fly in demonstrations.
G-BOAF forms the centrepiece of the Aerospace Bristol museum at Filton, which opened to the public in 2017.
G-BOAD, the aircraft that holds the record for the Heathrow – JFK crossing at 2 hours, 52 minutes, and 59 seconds, is on display at the Intrepid Sea, Air & Space Museum in New York.
F-BVFB is displayed at the Technik Museum Sinsheim in Germany alongside a Tu-144; this is the only instance of both supersonic passenger aircraft on display together.
On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets until modifications that involved strengthening the affected areas of the aircraft had been made.
According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse.
The claim that a metallic strip caused the crash was disputed during the trial both by witnesses (including the pilot of then French President Jacques Chirac's aircraft that had just landed on an adjacent runway when Flight 4590 caught fire) and by an independent French TV investigation that found a wheel spacer had not been installed in the left-side main gear and that the plane caught fire some 1,000 feet from where the metallic strip lay. British investigators and former French Concorde pilots looked at several other possibilities that the BEA report ignored, including an unbalanced weight distribution in the fuel tanks and loose landing gear. They came to the conclusion that the Concorde veered off course on the runway, which reduced takeoff speed below the crucial minimum. John Hutchinson, who had served as a Concorde captain for 15 years with British Airways, said "the fire on its own should have been 'eminently survivable; the pilot should have been able to fly his way out of trouble'", had it not been for a "lethal combination of operational error and 'negligence' by the maintenance department of Air France" that "nobody wants to talk about".
However, some of Hutchinson's claims are disputed and are directly contradicted by the cockpit voice recorder (CVR) recording (BEA Report, pp. 46–48) and by the Air France procedures manuals, which differed from those of British Airways in several crucial respects. The BEA accident report did explore the undercarriage bogie spacer issue in detail and concluded that it did not contribute in any significant way to the accident. It also confirmed that the titanium strip was the initiating cause, by examining it and matching it to the damage on the tyre.
On 6 December 2010, Continental Airlines and John Taylor, a mechanic who installed the metal strip, were found guilty of involuntary manslaughter; however, on 30 November 2012, a French court overturned the conviction, saying mistakes by Continental and Taylor did not make them criminally responsible.
Before the accident, Concorde had been arguably the safest operational passenger airliner in the world, with zero passenger deaths per kilometre travelled; there had, however, been two prior non-fatal accidents and, from 1995 to 2000, a rate of tyre damage some 30 times higher than that of subsonic airliners. Safety improvements were made in the wake of the crash, including more secure electrical controls, Kevlar lining for the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA chief Concorde pilot Mike Bannister. During the 3-hour 20-minute flight over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and 60,000 ft (18,000 m) before returning to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations.
The first flight with passengers after the 2000 grounding for safety modifications landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers.
Concorde had suffered two previous non-fatal accidents that were similar to each other.
On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, suffered a structural failure in-flight at supersonic speed. As the aircraft was climbing and accelerating through Mach 1.7, a "thud" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through 40,000 feet (12,000 m) at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had become separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period of time before the accident due to moisture seepage past the rivets in the rudder. Furthermore, production staff had not followed proper procedures during an earlier modification of the rudder, but the procedures were difficult to adhere to. The aircraft was repaired and returned to service.
On 21 March 1992, G-BOAB, while flying British Airways Flight 01 from London to New York, also suffered a structural failure in-flight at supersonic speed. While cruising at Mach 2, at approximately 53,000 feet (16,000 m) above mean sea level, the crew heard a "thump". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden "severe" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine, and it was attenuated when that engine's power was reduced. The crew shut down the No 2 engine and made a successful landing in New York, noting only that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had become separated from the structure of the rudder, which led to most of the upper rudder becoming separated in-flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure of the rudder, leading to it breaking up in-flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident, the severity of the effect of these repair materials on the structure and skin of the rudder was not appreciated.
The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during takeoffs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54 where a tyre blowout pierced the plane's fuel tank and damaged the port-side engine and electrical cables, with the loss of two of the craft's hydraulic systems.
Of the 20 aircraft built, 18 remain, with 16 on display to the public.
Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. It had been alleged that Soviet espionage efforts had resulted in the theft of Concorde blueprints, supposedly to assist in the design of the Tu-144. As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech, Sud Aviation, attributed this to two things, a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too-high a bypass ratio which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler supersonic wing design. In addition the Tu-144 required braking parachutes to land while Concorde used anti-lock brakes. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978.
The later production Tu-144 versions were more refined and competitive. The Tu-144D had Kolesov RD-36-51 turbojet engines providing greater fuel efficiency, cruising speed and a maximum range of 6,500 km, near Concorde's maximum range of 6,667 km. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 program was cancelled by the Soviet government on 1 July 1983.
The main competing designs for the US government-funded SST were the swing-wing Boeing 2707 and the compound delta wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were quite capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971 having spent more than $1 billion without any aircraft being built.
Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director was quoted as saying, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them."
Concorde produced nitrogen oxides in its exhaust which, despite complicated interactions with other ozone-depleting chemicals, are understood to degrade the ozone layer at the stratospheric altitudes at which it cruised. It has been pointed out that other, lower-flying airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small size of the fleet meant that the overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey of the National Oceanic and Atmospheric Administration in the United States warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde's might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Fahey said that if the ozone-destroying particles in the exhaust were produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur from the fuel would reduce the ozone-destroying impact of supersonic transport.
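Combining the two figures quoted above gives a rough sense of scale. The short sketch below is an illustrative back-of-the-envelope calculation only; the linear scaling between ozone loss and skin cancer incidence is an assumption, and the numbers are the published estimates rather than measured outcomes.

```python
# Illustrative arithmetic only: combines the projected ozone loss for a
# hypothetical 500-aircraft supersonic fleet with the estimated skin-cancer
# sensitivity per 1% of ozone lost. Linear scaling is assumed.
ozone_drop_percent = 2.0             # projected global ozone drop (500-aircraft fleet)
cancer_rise_per_percent_ozone = 2.0  # estimated % rise in non-melanoma skin cancer
                                     # per 1% drop in ozone

projected_cancer_rise = ozone_drop_percent * cancer_rise_per_percent_ozone
print(f"Implied rise in non-melanoma skin cancer incidence: ~{projected_cancer_rise:.0f}%")
```

On these figures, the hypothetical fleet would imply roughly a 4 per cent rise in worldwide incidence, which illustrates why the projection attracted attention even though the actual Concorde fleet was far too small to have such an effect.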
Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment, as well as awareness of the complex decision-analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the Campaign to Protect Rural England (CPRE) has issued tranquillity maps since 1990.
Concorde was normally perceived as a privilege of the rich, but special circular charter flights, and one-way charters with the return made by another flight or by ship, were arranged to bring a trip within the means of moderately well-off enthusiasts.
The aircraft was usually referred to by the British simply as "Concorde". In France it was known as "le Concorde", the definite article "le" being used in French grammar to introduce the name of a ship or aircraft, and the capital letter distinguishing the proper name from the common noun of the same spelling; in French, the common noun concorde means "agreement, harmony, or peace". Concorde's pilots, and British Airways in official publications, often referred to Concorde, whether in the singular or the plural, as "she" or "her".
As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage.
In 2006, 37 years after its first test flight, Concorde was announced as the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast, with Concorde beating other British design icons such as the Mini, the miniskirt, the Jaguar E-Type, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire.
The heads of state and government of France and the United Kingdom flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as the French flagship aircraft on foreign visits. Queen Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair flew on Concorde on charter flights, such as the Queen's trips to Barbados for her Silver Jubilee in 1977 and again in 1987 and 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989.
Concorde sometimes made special flights for demonstrations and air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS shows), as well as for parades and celebrations (for example, Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including, on multiple occasions, by the President of Zaire, Mobutu Sese Seko), for advertising campaigns (including for the firm OKI), for Olympic torch relays (the 1992 Winter Olympics in Albertville) and for observing solar eclipses, including the eclipse of 30 June 1973 and the total eclipse of 11 August 1999.
The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by British Airways' G-BOAD, in 2 hours, 52 minutes and 59 seconds from take-off to touchdown, aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney, on the opposite side of the world, in 17 hours, 3 minutes and 45 seconds, including refuelling stops.
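As a rough indication of what that record time implies, the minimal sketch below converts it into an average ground speed, assuming a JFK-to-Heathrow great-circle distance of about 3,450 statute miles (a figure not given in the text above):

```python
# Back-of-the-envelope average ground speed for the 7 February 1996 record flight.
# The distance is an assumed approximation of the JFK-Heathrow great-circle route;
# the elapsed time is the record quoted above.
distance_miles = 3450                       # assumed great-circle distance
elapsed_seconds = 2 * 3600 + 52 * 60 + 59   # 2 h 52 min 59 s

average_speed_mph = distance_miles / (elapsed_seconds / 3600)
print(f"Average ground speed: ~{average_speed_mph:.0f} mph")
```

On that assumed distance the average works out at roughly 1,200 mph, about twice the ground speed a subsonic airliner would typically achieve on the same route.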
Concorde set the FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours of the US chartered Air France Concorde F-BTSD, which circumnavigated the world westbound from Lisbon, Portugal, in 32 hours, 49 minutes and 3 seconds, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok and Bahrain.
The eastbound record was set by the same Air France Concorde (F-BTSD), under charter to Concorde Spirit Tours in the US, on 15–16 August 1995. This promotional flight circumnavigated the world from New York's JFK International Airport in 31 hours, 27 minutes and 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu and Acapulco. By the 30th anniversary of its first flight, on 2 March 1999, Concorde had clocked up 920,000 flight hours, more than 600,000 of them supersonic, far more than all the other supersonic aircraft in the Western world combined.
On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes and 12 seconds. Because of the restrictions on supersonic overflight of the United States, the Canadian authorities granted permission for the majority of the journey to be flown supersonically over sparsely populated Canadian territory.
Data from The Wall Street Journal, The Concorde Story, The International Directory of Civil Aircraft, Aérospatiale/BAC Concorde 1969 onwards (all models)
General characteristics
Performance
Avionics
"title": "Operational history"
},
{
"paragraph_id": 83,
"text": "One important reason for BCal's interest in Concorde was that the British Government's 1976 aviation policy review had opened the possibility of BA setting up supersonic services in competition with BCal's established sphere of influence. To counteract this potential threat, BCal considered their own independent Concorde plans, as well as a partnership with BA. BCal were considered most likely to have set up a Concorde service on the Gatwick–Lagos route, a major source of revenue and profits within BCal's scheduled route network; BCal's Concorde task force did assess the viability of a daily supersonic service complementing the existing subsonic widebody service on this route.",
"title": "Operational history"
},
{
"paragraph_id": 84,
"text": "BCal entered into a bid to acquire at least one Concorde. However, BCal eventually arranged for two aircraft to be leased from BA and Aérospatiale respectively, to be maintained by either BA or Air France. BCal's envisaged two-Concorde fleet would have required a high level of aircraft usage to be cost-effective; therefore, BCal had decided to operate the second aircraft on a supersonic service between Gatwick and Atlanta, with a stopover at either Gander or Halifax. Consideration was given to services to Houston and various points on its South American network at a later stage. Both supersonic services were to be launched at some point during 1980; however, steeply rising oil prices caused by the 1979 energy crisis led to BCal shelving their supersonic ambitions.",
"title": "Operational history"
},
{
"paragraph_id": 85,
"text": "By around 1981 in the UK, the future for Concorde looked bleak. The British government had lost money operating Concorde every year, and moves were afoot to cancel the service entirely. A cost projection came back with greatly reduced metallurgical testing costs because the test rig for the wings had built up enough data to last for 30 years and could be shut down. Despite this, the government was not keen to continue. In 1983, BA's managing director, Sir John King, convinced the government to sell the aircraft outright to the then state-owned British Airways for £16.5 million (equivalent to £46.32 million or US$59.12 million in 2019) plus the first year's profits. In 2003, Lord Heseltine, who was the minister responsible at the time, revealed to Alan Robb on BBC Radio 5 Live, that the aircraft had been sold for \"next to nothing\". Asked by Robb if it was the worst deal ever negotiated by a government minister, he replied \"That is probably right. But if you have your hands tied behind your back and no cards and a very skillful negotiator on the other side of the table... I defy you to do any [better].\" British Airways was subsequently privatised in 1987.",
"title": "Operational history"
},
{
"paragraph_id": 86,
"text": "Its estimated operating costs were $3,800 per block hour in 1972 (equivalent to $26,585 in 2022), compared to actual 1971 operating costs of $1,835 for a 707 and $3,500 for a 747 (equivalent to $13,260 and $25,291, respectively); for a 3,050 nmi (5,650 km) London–New York sector, a 707 cost $13,750 or 3.04¢ per seat/nmi (in 1971 dollars), a 747 $26,200 or 2.4¢ per seat/nmi and Concorde $14,250 or 4.5¢ per seat/nmi.",
"title": "Operational history"
},
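The per-seat figures quoted above follow from dividing each sector cost by the product of seat count and sector length. A minimal sketch of that arithmetic, assuming illustrative cabin sizes of roughly 150 seats for the 707, 360 for the 747 and 100 for Concorde (the seat counts are not given in the paragraph and are assumptions for illustration only):

```python
# Rough check of the quoted 1971-dollar per-seat costs on the 3,050 nmi London-New York sector.
# Seat counts are assumed for illustration; they are not stated in the source paragraph.
SECTOR_NMI = 3050

aircraft = {
    # name: (sector cost in 1971 USD, assumed seat count)
    "Boeing 707": (13_750, 150),
    "Boeing 747": (26_200, 360),
    "Concorde":   (14_250, 100),
}

for name, (cost_usd, seats) in aircraft.items():
    cents_per_seat_nmi = cost_usd / (seats * SECTOR_NMI) * 100
    print(f"{name:10}: {cents_per_seat_nmi:.2f} cents per seat-nmi")

# With these assumed cabin sizes the output lands close to the quoted
# 3.04, 2.4 and 4.5 cents per seat/nmi respectively.
```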
{
"paragraph_id": 87,
"text": "In 1983, Pan Am accused the British Government of subsidising British Airways Concorde air fares, on which a return London–New York was £2,399 (£8,612 in 2021 prices), compared to £1,986 (£7,129) with a subsonic first class return, and London–Washington return was £2,426 (£8,709) instead of £2,258 (£8,106) subsonic.",
"title": "Operational history"
},
{
"paragraph_id": 88,
"text": "Concorde's unit cost was then $33.8 million ($180 million in 2022 dollars). British Airways and Air France benefited from a significantly reduced purchase price from the manufacturing consortium via their respective governments.",
"title": "Operational history"
},
{
"paragraph_id": 89,
"text": "The speed and premium service were relatively costly: in 1997, the round-trip ticket price from New York to London was $7,995 (equivalent to $14,600 in 2022), more than 30 times the cost of the least expensive scheduled flight for this route, however when compared with subsonic First Class on the same route, return tickets were only about 10-15% more expensive while flight time was cut in half.",
"title": "Operational history"
},
{
"paragraph_id": 90,
"text": "After on and off profitability, in 1982 Concorde was established in its own operating division (Concorde Division) under Capt. Brian Walpole and Capt. Jock Lowe. Their research revealed that passengers thought that the fare was higher than it actually was, so the airline raised ticket prices to match these perceptions and, following the successful marketing research and repositioning, Concorde ran profitably for British Airways. The ticket price was pitched above subsonic First Class but not as much as might be expected. In 1996 the Concorde return fare was £4,772 compared to £4,314 for subsonic First Class, adding to its corporate appeal. It developed a loyal following and earned over half a billion pounds in profit over the next 20 years with (typically) just 5 aircraft operating and 2 in various maintenance cycles.",
"title": "Operational history"
},
{
"paragraph_id": 91,
"text": "Between March 1984 and January 1991, British Airways flew a thrice-weekly Concorde service between London and Miami, stopping at Dulles International Airport. Until 2003, Air France and British Airways continued to operate the New York services daily. From 1987 to 2003 British Airways flew a Saturday morning Concorde service to Grantley Adams International Airport, Barbados, during the summer and winter holiday season.",
"title": "Operational history"
},
{
"paragraph_id": 92,
"text": "Prior to the Air France Paris crash, several UK and French tour operators operated charter flights to European destinations on a regular basis; the charter business was viewed as lucrative by British Airways and Air France.",
"title": "Operational history"
},
{
"paragraph_id": 93,
"text": "In 1997, British Airways held a promotional contest to mark the 10th anniversary of the airline's move into the private sector. The promotion was a lottery to fly to New York held for 190 tickets valued at £5,400 each, to be offered at £10. Contestants had to call a special hotline to compete with up to 20 million people.",
"title": "Operational history"
},
{
"paragraph_id": 94,
"text": "On 10 April 2003, Air France and British Airways simultaneously announced they would retire Concorde later that year. They cited low passenger numbers following the 25 July 2000 crash, the slump in air travel following the September 11 attacks, and rising maintenance costs: Airbus, the company that acquired Aérospatiale in 2000, had made a decision in 2003 to no longer supply replacement parts for the aircraft. Although Concorde was technologically advanced when introduced in the 1970s, 30 years later, its analogue cockpit was outdated. There had been little commercial pressure to upgrade Concorde due to a lack of competing aircraft, unlike other airliners of the same era such as the Boeing 747. By its retirement, it was the last aircraft in the British Airways fleet that had a flight engineer; other aircraft, such as the modernised 747-400, had eliminated the role.",
"title": "Operational history"
},
{
"paragraph_id": 95,
"text": "On 11 April 2003, Virgin Atlantic founder Sir Richard Branson announced that the company was interested in purchasing British Airways' Concorde fleet \"for the same price that they were given them for – one pound\". British Airways dismissed the idea, prompting Virgin to increase their offer to £1 million each. Branson claimed that when BA was privatised, a clause in the agreement required them to allow another British airline to operate Concorde if BA ceased to do so, but the Government denied the existence of such a clause. In October 2003, Branson wrote in The Economist that his final offer was \"over £5 million\" and that he had intended to operate the fleet \"for many years to come\". The chances for keeping Concorde in service were stifled by Airbus's lack of support for continued maintenance.",
"title": "Operational history"
},
{
"paragraph_id": 96,
"text": "It has been suggested that Concorde was not withdrawn for the reasons usually given but that it became apparent during the grounding of Concorde that the airlines could make more profit carrying first-class passengers subsonically. A lack of commitment to Concorde from Director of Engineering Alan MacDonald was cited as having undermined BA's resolve to continue operating Concorde.",
"title": "Operational history"
},
{
"paragraph_id": 97,
"text": "Other reasons why the attempted revival of Concorde never happened relate to the fact that the narrow fuselage did not allow for \"luxury\" features of subsonic air travel such as moving space, reclining seats and overall comfort. In the words of The Guardian's Dave Hall, \"Concorde was an outdated notion of prestige that left sheer speed the only luxury of supersonic travel.\"",
"title": "Operational history"
},
{
"paragraph_id": 98,
"text": "The general downturn in the commercial aviation industry after the September 11 attacks in 2001 and the end of maintenance support for Concorde by Airbus, the successor to Aérospatiale, contributed to the aircraft's retirement.",
"title": "Operational history"
},
{
"paragraph_id": 99,
"text": "Air France made its final commercial Concorde landing in the United States in New York City from Paris on 30 May 2003. Air France's final Concorde flight took place on 27 June 2003 when F-BVFC retired to Toulouse.",
"title": "Operational history"
},
{
"paragraph_id": 100,
"text": "An auction of Concorde parts and memorabilia for Air France was held at Christie's in Paris on 15 November 2003; 1,300 people attended, and several lots exceeded their predicted values. French Concorde F-BVFC was retired to Toulouse and kept functional for a short time after the end of service, in case taxi runs were required in support of the French judicial enquiry into the 2000 crash. The aircraft is now fully retired and no longer functional.",
"title": "Operational history"
},
{
"paragraph_id": 101,
"text": "French Concorde F-BTSD has been retired to the \"Musée de l'Air\" at Paris–Le Bourget Airport near Paris; unlike the other museum Concordes, a few of the systems are being kept functional. For instance, the famous \"droop nose\" can still be lowered and raised. This led to rumours that they could be prepared for future flights for special occasions.",
"title": "Operational history"
},
{
"paragraph_id": 102,
"text": "French Concorde F-BVFB is at the Auto & Technik Museum Sinsheim at Sinsheim, Germany, after its last flight from Paris to Baden-Baden, followed by transport to Sinsheim via barge and road. The museum also has a Tupolev Tu-144 on display – this is the only place where both supersonic airliners can be seen together.",
"title": "Operational history"
},
{
"paragraph_id": 103,
"text": "In 1989, Air France signed a letter of agreement to donate a Concorde to the National Air and Space Museum in Washington D.C. upon the aircraft's retirement. On 12 June 2003, Air France honoured that agreement, donating Concorde F-BVFA (serial 205) to the museum upon the completion of its last flight. This aircraft was the first Air France Concorde to open service to Rio de Janeiro, Washington, D.C., and New York and had flown 17,824 hours. It is on display at the Smithsonian's Steven F. Udvar-Hazy Center at Dulles International Airport.",
"title": "Operational history"
},
{
"paragraph_id": 104,
"text": "British Airways conducted a North American farewell tour in October 2003. G-BOAG visited Toronto Pearson International Airport on 1 October, after which it flew to New York's John F. Kennedy International Airport. G-BOAD visited Boston's Logan International Airport on 8 October, and G-BOAG visited Dulles International Airport on 14 October.",
"title": "Operational history"
},
{
"paragraph_id": 105,
"text": "In a week of farewell flights around the United Kingdom, Concorde visited Birmingham on 20 October, Belfast on 21 October, Manchester on 22 October, Cardiff on 23 October, and Edinburgh on 24 October. Each day the aircraft made a return flight out and back into Heathrow to the cities, often overflying them at low altitude. On 22 October, both Concorde flight BA9021C, a special from Manchester, and BA002 from New York landed simultaneously on both of Heathrow's runways. On 23 October 2003, the Queen consented to the illumination of Windsor Castle, an honour reserved for state events and visiting dignitaries, as Concorde's last west-bound commercial flight departed London.",
"title": "Operational history"
},
{
"paragraph_id": 106,
"text": "British Airways retired its Concorde fleet on 24 October 2003. G-BOAG left New York to a fanfare similar to that given for Air France's F-BTSD, while two more made round trips, G-BOAF over the Bay of Biscay, carrying VIP guests including former Concorde pilots, and G-BOAE to Edinburgh. The three aircraft then circled over London, having received special permission to fly at low altitude, before landing in sequence at Heathrow. The captain of the New York to London flight was Mike Bannister. The final flight of a Concorde in the US occurred on 5 November 2003 when G-BOAG flew from New York's JFK Airport to Seattle's Boeing Field to join the Museum of Flight's permanent collection. The plane was piloted by Mike Bannister and Les Broadie, who claimed a flight time of three hours, 55 minutes and 12 seconds, a record between the two cities that was made possible by Canada granting use of a supersonic corridor between Chibougamau, Quebec, and Peace River, Alberta. The museum had been pursuing a Concorde for their collection since 1984. The final flight of a Concorde worldwide took place on 26 November 2003 with a landing at Bristol Filton Airport.",
"title": "Operational history"
},
{
"paragraph_id": 107,
"text": "All of BA's Concorde fleet have been grounded, drained of hydraulic fluid and their airworthiness certificates withdrawn. Jock Lowe, ex-chief Concorde pilot and manager of the fleet, estimated in 2004 that it would cost £10–15 million to make G-BOAF airworthy again. BA maintain ownership and have stated that they will not fly again due to a lack of support from Airbus. On 1 December 2003, Bonhams held an auction of British Airways Concorde artefacts, including a nose cone, at Kensington Olympia in London. Proceeds of around £750,000 were raised, with the majority going to charity. G-BOAD is currently on display at the Intrepid Sea, Air & Space Museum in New York. In 2007, BA announced that the advertising spot at Heathrow where a 40% scale model of Concorde was located would not be retained; the model is now on display at the Brooklands Museum, in Surrey, England.",
"title": "Operational history"
},
{
"paragraph_id": 108,
"text": "Concorde G-BBDG was used for test flying and trials work. It was retired in 1981 and then only used for spares. It was dismantled and transported by road from Filton to the Brooklands Museum, where it was restored from essentially a shell. It remains open to visitors to the museum, and wears the original Negus & Negus livery worn by the Concorde fleet during their initial years of service with BA.",
"title": "Operational history"
},
{
"paragraph_id": 109,
"text": "Concorde G-BOAB, call sign Alpha Bravo, was never modified and returned to service with the rest of British Airways' fleet, and has remained at London Heathrow Airport since its final flight, a ferry flight from JFK in 2000. Although the aircraft was effectively retired, G-BOAB was used as a test aircraft for the Project Rocket interiors that were in the process of being added to the rest of BA's fleet. G-BOAB has been towed around Heathrow on various occasions; it currently occupies a space on the airport's apron and is regularly visible to aircraft moving around the airport.",
"title": "Operational history"
},
{
"paragraph_id": 110,
"text": "One of the youngest Concordes (F-BTSD) is on display at Le Bourget Air and Space Museum in Paris. In February 2010, it was announced that the museum and a group of volunteer Air France technicians intend to restore F-BTSD so it can taxi under its own power. In May 2010, it was reported that the British Save Concorde Group and French Olympus 593 groups had begun inspecting the engines of a Concorde at the French museum; their intent was to restore the airliner to a condition where it could fly in demonstrations.",
"title": "Operational history"
},
{
"paragraph_id": 111,
"text": "G-BOAF forms the centrepiece of the Aerospace Bristol museum at Filton, which opened to the public in 2017.",
"title": "Operational history"
},
{
"paragraph_id": 112,
"text": "G-BOAD, the aircraft that holds the record for the Heathrow – JFK crossing at 2 hours, 52 minutes, and 59 seconds, is on display at the Intrepid Sea, Air & Space Museum in New York.",
"title": "Operational history"
},
{
"paragraph_id": 113,
"text": "F-BVFB is displayed at the Technik Museum Sinsheim in Germany alongside a Tu-144; this is the only instance of both supersonic passenger aircraft on display together.",
"title": "Operational history"
},
{
"paragraph_id": 114,
"text": "On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde. This crash also damaged Concorde's reputation and caused both British Airways and Air France to temporarily ground their fleets until modifications that involved strengthening the affected areas of the aircraft had been made.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 115,
"text": "According to the official investigation conducted by the Bureau of Enquiry and Analysis for Civil Aviation Safety (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 116,
"text": "The claim that a metallic strip caused the crash was disputed during the trial both by witnesses (including the pilot of then French President Jacques Chirac's aircraft that had just landed on an adjacent runway when Flight 4590 caught fire) and by an independent French TV investigation that found a wheel spacer had not been installed in the left-side main gear and that the plane caught fire some 1,000 feet from where the metallic strip lay. British investigators and former French Concorde pilots looked at several other possibilities that the BEA report ignored, including an unbalanced weight distribution in the fuel tanks and loose landing gear. They came to the conclusion that the Concorde veered off course on the runway, which reduced takeoff speed below the crucial minimum. John Hutchinson, who had served as a Concorde captain for 15 years with British Airways, said \"the fire on its own should have been 'eminently survivable; the pilot should have been able to fly his way out of trouble'\", had it not been for a \"lethal combination of operational error and 'negligence' by the maintenance department of Air France\" that \"nobody wants to talk about\".",
"title": "Accidents and incidents"
},
{
"paragraph_id": 117,
"text": "However some of Hutchinson's claims are disputed and are directly contradicted by the Cockpit Voice Recording (CVR) (BEA Report pp. 46-48) and the Air France procedures manuals which differed from those of British Airways in several key but crucial aspects. The BEA accident report did explore in detail, the undercarriage bogey spacer issue and concluded that it did not contribute in any significant way to the accident. It also confirmed the titanium strip was the initiating cause by examining and matching it and the damage to the tyre.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 118,
"text": "On 6 December 2010, Continental Airlines and John Taylor, a mechanic who installed the metal strip, were found guilty of involuntary manslaughter; however, on 30 November 2012, a French court overturned the conviction, saying mistakes by Continental and Taylor did not make them criminally responsible.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 119,
"text": "Before the accident, Concorde had been arguably the safest operational passenger airliner in the world with zero passenger deaths-per-kilometres travelled; but there had been two prior non-fatal accidents and a rate of tyre damage some 30 times higher than subsonic airliners from 1995 to 2000. Safety improvements were made in the wake of the crash, including more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. During the 3-hour 20-minute flight over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 and 60,000 ft (18,000 m) before returning to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 120,
"text": "The first flight with passengers after the 2000 grounding for safety modifications landed shortly before the World Trade Center attacks in the United States. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 121,
"text": "Concorde had suffered two previous non-fatal accidents that were similar to each other.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 122,
"text": "On 12 April 1989, Concorde G-BOAF, on a chartered flight from Christchurch, New Zealand, to Sydney, suffered a structural failure in-flight at supersonic speed. As the aircraft was climbing and accelerating through Mach 1.7, a \"thud\" was heard. The crew did not notice any handling problems, and they assumed the thud they heard was a minor engine surge. No further difficulty was encountered until descent through 40,000 feet (12,000 m) at Mach 1.3, when a vibration was felt throughout the aircraft, lasting two to three minutes. Most of the upper rudder had become separated from the aircraft at this point. Aircraft handling was unaffected, and the aircraft made a safe landing at Sydney. The UK's Air Accidents Investigation Branch (AAIB) concluded that the skin of the rudder had been separating from the rudder structure over a period of time before the accident due to moisture seepage past the rivets in the rudder. Furthermore, production staff had not followed proper procedures during an earlier modification of the rudder, but the procedures were difficult to adhere to. The aircraft was repaired and returned to service.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 123,
"text": "On 21 March 1992, G-BOAB while flying British Airways Flight 01 from London to New York, also suffered a structural failure in-flight at supersonic speed. While cruising at Mach 2, at approximately 53,000 feet (16,000 m) above mean sea level, the crew heard a \"thump\". No difficulties in handling were noticed, and no instruments gave any irregular indications. This crew also suspected there had been a minor engine surge. One hour later, during descent and while decelerating below Mach 1.4, a sudden \"severe\" vibration began throughout the aircraft. The vibration worsened when power was added to the No 2 engine, and it was attenuated when that engine's power was reduced. The crew shut down the No 2 engine and made a successful landing in New York, noting only that increased rudder control was needed to keep the aircraft on its intended approach course. Again, the skin had become separated from the structure of the rudder, which led to most of the upper rudder becoming separated in-flight. The AAIB concluded that repair materials had leaked into the structure of the rudder during a recent repair, weakening the bond between the skin and the structure of the rudder, leading to it breaking up in-flight. The large size of the repair had made it difficult to keep repair materials out of the structure, and prior to this accident, the severity of the effect of these repair materials on the structure and skin of the rudder was not appreciated.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 124,
"text": "The 2010 trial involving Continental Airlines over the crash of Flight 4590 established that from 1976 until Flight 4590 there had been 57 tyre failures involving Concordes during takeoffs, including a near-crash at Dulles International Airport on 14 June 1979 involving Air France Flight 54 where a tyre blowout pierced the plane's fuel tank and damaged the port-side engine and electrical cables, with the loss of two of the craft's hydraulic systems.",
"title": "Accidents and incidents"
},
{
"paragraph_id": 125,
"text": "Of the 20 aircraft built, 18 remain, with 16 on display to the public.",
"title": "Aircraft on display"
},
{
"paragraph_id": 127,
"text": "Concorde was one of only two supersonic jetliner models to operate commercially; the other was the Soviet-built Tupolev Tu-144, which operated in the late 1970s. The Tu-144 was nicknamed \"Concordski\" by Western European journalists for its outward similarity to Concorde. It had been alleged that Soviet espionage efforts had resulted in the theft of Concorde blueprints, supposedly to assist in the design of the Tu-144. As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech, Sud Aviation, attributed this to two things, a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too-high a bypass ratio which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler supersonic wing design. In addition the Tu-144 required braking parachutes to land while Concorde used anti-lock brakes. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978.",
"title": "Comparable aircraft"
},
{
"paragraph_id": 128,
"text": "The later production Tu-144 versions were more refined and competitive. The Tu-144D had Kolesov RD-36-51 turbojet engines providing greater fuel efficiency, cruising speed and a maximum range of 6,500 km, near Concorde's maximum range of 6,667 km. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The Tu-144 had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing. The Tu-144 program was cancelled by the Soviet government on 1 July 1983.",
"title": "Comparable aircraft"
},
{
"paragraph_id": 129,
"text": "The main competing designs for the US government-funded SST were the swing-wing Boeing 2707 and the compound delta wing Lockheed L-2000. These were to have been larger, with seating for up to 300 people. The Boeing 2707 was selected for development. Concorde first flew in 1969, the year Boeing began building 2707 mockups after changing the design to a cropped delta wing; the cost of this and other changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were quite capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971 having spent more than $1 billion without any aircraft being built.",
"title": "Comparable aircraft"
},
{
"paragraph_id": 130,
"text": "Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director was quoted as saying, \"It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them.\"",
"title": "Impact"
},
{
"paragraph_id": 131,
"text": "Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation to the ozone layer at the stratospheric altitudes it cruised. It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 per cent drop in global ozone levels, much higher than previously thought. Each 1 per cent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 per cent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport.",
"title": "Impact"
},
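The two percentages quoted above combine by simple multiplication. A minimal sketch of that arithmetic, treating the quoted estimates as a linear rule (an idealisation for illustration, not a model from the source):

```python
# Illustrative arithmetic for the figures quoted above: a projected 2% ozone drop
# from a 500-aircraft supersonic fleet, and an estimated 2% rise in non-melanoma
# skin cancer incidence per 1% of ozone lost. Treating both as a linear rule is
# an idealisation for illustration only.
ozone_drop_pct = 2.0             # projected global ozone reduction
cancer_rise_per_pct_ozone = 2.0  # % incidence increase per 1% ozone loss

implied_incidence_rise_pct = ozone_drop_pct * cancer_rise_per_pct_ozone
print(f"Implied rise in non-melanoma skin cancer incidence: ~{implied_incidence_rise_pct:.0f}%")
# -> ~4%
```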
{
"paragraph_id": 132,
"text": "Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990.",
"title": "Impact"
},
{
"paragraph_id": 133,
"text": "Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts.",
"title": "Impact"
},
{
"paragraph_id": 134,
"text": "The aircraft was usually referred to by the British as simply \"Concorde\". In France it was known as \"le Concorde\" due to \"le\", the definite article, used in French grammar to introduce the name of a ship or aircraft, and the capital being used to distinguish a proper name from a common noun of the same spelling. In French, the common noun concorde means \"agreement, harmony, or peace\". Concorde's pilots and British Airways in official publications often refer to Concorde both in the singular and plural as \"she\" or \"her\".",
"title": "Impact"
},
{
"paragraph_id": 135,
"text": "As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage.",
"title": "Impact"
},
{
"paragraph_id": 136,
"text": "In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC (through The Culture Show) and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type car, the Tube map, the World Wide Web, the K2 red telephone box and the Supermarine Spitfire.",
"title": "Impact"
},
{
"paragraph_id": 137,
"text": "The heads of France and the United Kingdom flew in Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as French flagman aircraft in foreign visits. Queen Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair took Concorde in some charter flights such as the Queen's trips to Barbados on her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989.",
"title": "Impact"
},
{
"paragraph_id": 138,
"text": "Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire Mobutu Sese Seko on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again for the total solar eclipse on 11 August 1999.",
"title": "Impact"
},
{
"paragraph_id": 139,
"text": "The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by the British Airways G-BOAD in 2 hours, 52 minutes, 59 seconds from take-off to touchdown aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney—on the opposite side of the world—in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops.",
"title": "Impact"
},
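For a rough sense of what the record crossing above implies, the block time converts directly into an average speed. A minimal sketch, assuming a JFK–Heathrow great-circle distance of about 5,540 km (the distance is not given in the paragraph, and the track actually flown would have been somewhat longer):

```python
# Average block speed for the record 7 February 1996 JFK -> Heathrow crossing.
# The 5,540 km figure is an assumed great-circle distance, not from the source;
# the route actually flown would have been somewhat longer.
distance_km = 5_540
hours, minutes, seconds = 2, 52, 59
block_time_h = hours + minutes / 60 + seconds / 3600

avg_speed_kmh = distance_km / block_time_h
print(f"Block time: {block_time_h:.3f} h")
print(f"Average block speed: ~{avg_speed_kmh:.0f} km/h")
# Roughly 1,900 km/h - well below the Mach 2.02 cruise speed, reflecting the
# subsonic climb and descent segments, even with the 175 mph tailwind helping.
```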
{
"paragraph_id": 140,
"text": "Concorde set the FAI \"Westbound Around the World\" and \"Eastbound Around the World\" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first voyage to the New World, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain.",
"title": "Impact"
},
{
"paragraph_id": 141,
"text": "The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. By its 30th flight anniversary on 2 March 1999 Concorde had clocked up 920,000 flight hours, with more than 600,000 supersonic, many more than all of the other supersonic aircraft in the Western world combined.",
"title": "Impact"
},
{
"paragraph_id": 142,
"text": "On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Due to the restrictions on supersonic overflights within the US the flight was granted permission by the Canadian authorities for the majority of the journey to be flown supersonically over sparsely-populated Canadian territory.",
"title": "Impact"
},
{
"paragraph_id": 143,
"text": "Data from The Wall Street Journal, The Concorde Story, The International Directory of Civil Aircraft, Aérospatiale/BAC Concorde 1969 onwards (all models)",
"title": "Specifications"
},
{
"paragraph_id": 144,
"text": "General characteristics",
"title": "Specifications"
},
{
"paragraph_id": 145,
"text": "Performance",
"title": "Specifications"
},
{
"paragraph_id": 146,
"text": "Avionics",
"title": "Specifications"
}
] | Concorde is a retired Franco-British supersonic airliner jointly developed and manufactured by Sud Aviation and the British Aircraft Corporation (BAC). Studies started in 1954, and France and the UK signed a treaty establishing the development project on 29 November 1962, as the programme cost was estimated at £70 million.
Construction of the six prototypes began in February 1965, and the first flight took off from Toulouse on 2 March 1969. A market for 350 aircraft was predicted, and the manufacturers received up to 100 option orders from many major airlines.
On 9 October 1975, it received its French Certificate of Airworthiness, and its UK certificate from the CAA followed on 5 December. Concorde is a tailless aircraft design with a narrow fuselage permitting 4-abreast seating for 92 to 128 passengers, an ogival delta wing and a droop nose for landing visibility.
It is powered by four Rolls-Royce/Snecma Olympus 593 turbojets with variable engine intake ramps, and reheat for take-off and acceleration to supersonic speed.
Constructed out of aluminium, it was the first airliner to have analogue fly-by-wire flight controls. The airliner could maintain a supercruise up to Mach 2.04 at an altitude of 60,000 ft (18.3 km). Delays and cost overruns increased the programme cost to £1.5–2.1 billion in 1976.
Concorde entered service on 21 January of that year with Air France from Paris-Roissy and British Airways from London Heathrow.
Transatlantic flights were the main market, to Washington Dulles from 24 May, and to New York JFK from 17 October 1977.
Air France and British Airways remained the sole customers with seven airframes each, for a total production of twenty.
Supersonic flight more than halved travel times, but sonic booms over the ground limited it to transoceanic flights only. Its only competitor was the Tupolev Tu-144, carrying passengers from November 1977 until a May 1978 crash, while a potential competitor, the Boeing 2707, was cancelled in 1971 before any prototypes were built. On 25 July 2000, Air France Flight 4590 crashed shortly after take-off with all 109 occupants and four on the ground killed. This was the only fatal incident involving Concorde; commercial service was suspended until November 2001. The Concorde aircraft were retired in 2003, 27 years after commercial operations had begun. Most of the aircraft remain on display in Europe and America. | 2001-11-08T18:01:06Z | 2023-12-31T00:17:32Z | [
"Template:Cite journal",
"Template:Use dmy dates",
"Template:Efn",
"Template:Sud/Aérospatiale aircraft",
"Template:Use British English",
"Template:Format price",
"Template:Clear left",
"Template:Cite press release",
"Template:Authority control",
"Template:Sfrac",
"Template:Inflation-year",
"Template:IPAc-en",
"Template:Main",
"Template:Refend",
"Template:Infobox aircraft type",
"Template:GBPConvert",
"Template:Reflist",
"Template:Cite web",
"Template:British Aircraft Corporation aircraft",
"Template:Short description",
"Template:Portal",
"Template:Cite book",
"Template:Other uses",
"Template:Inflation",
"Template:Convert",
"Template:Rp",
"Template:Cn",
"Template:Refn",
"Template:IPA-fr",
"Template:Pn",
"Template:Cite tech report",
"Template:Cite magazine",
"Template:Cite conference",
"Template:BAE aircraft",
"Template:Supersonic transport",
"Template:Good article",
"Template:Blockquote",
"Template:Refbegin",
"Template:Webarchive",
"Template:Cbignore",
"Template:Aircraft specs",
"Template:Cite news",
"Template:Commons category",
"Template:Infobox aircraft begin",
"Template:Multiple image",
"Template:Abbr",
"Template:Sfn",
"Template:Inflation-fn",
"Template:Further",
"Template:Notelist",
"Template:ISBN",
"Template:Citation",
"Template:Inflation/year",
"Template:Cvt",
"Template:Dead link",
"Template:Em",
"Template:See also"
] | https://en.wikipedia.org/wiki/Concorde |
7,053 | Cannon | A cannon is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder ("black powder") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield.
The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons.
The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons does not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannons in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East, and a depiction of one had appeared in Europe by 1326. Recorded usage of cannon began appearing almost immediately after. They subsequently spread to India, their usage on the subcontinent being first attested in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia.
Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg (35,000 lb) cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s.
The word cannon is derived from the Old Italian word cannone, meaning "large tube", which came from the Latin canna, in turn originating from the Greek κάννα (kanna), "reed", and then generalised to mean any hollow tube-like object; cognate with the Akkadian qanu(m) and the Hebrew qāneh, "tube, reed". The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both of the plural forms cannons and cannon are correct.
The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort. Co-viative projectiles such as iron scraps or porcelain shards were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal.
The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also considered by some to be the oldest firearm, since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannons. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan.
Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century.
References to cannons proliferated throughout China in the following centuries. Cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called The Iron Cannon Affair describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once." By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrisons' usage of cannon; however, the Ming themselves would use cannon, in the thousands, later on during the siege of Suzhou in 1366.
The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: Pao). During the Ming dynasty cannons were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries cannon-armed Chinese ships also travelled throughout Southeast Asia. Cannon appeared in Đại Việt by 1390 at the latest.
The first western cannon to be introduced were breech-loaders in the early 16th century; the Chinese began producing them themselves by 1523 and improved on them by incorporating composite metal construction.
Japan did not acquire cannon until 1510 when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannons against Japanese troops. Despite their defensive advantage and the use of arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–1598), the Ming–Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin.
According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619, the city was armed with large cannon with cannonballs weighing more than 30 kg (66 lb). His general observation was that the Chinese were militarily capable and had firearms:
There are many merchants and military persons in the Chinese Empire. They have firearms, and the Chinese are very skillful in military affairs. They go into battle against the Yellow Mongols who fight with bows and arrows.
Outside of China, the earliest texts to mention gunpowder are Roger Bacon's Opus Majus (1267) and Opus Tertium in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae, dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke.
A record of a gun in Europe dating to 1322 was discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as De Nobilitatibus, sapientii et prudentiis regum (Concerning the Majesty, Wisdom, and Prudence of Kings), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's, De secretis secretorum Aristotelis. On 11 February of that same year, the Signoria of Florence appointed two officers to obtain canones de mettallo and ammunition for the town's defense. In the following year a document from the Turin area recorded a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead". A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using man-portable gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark; however, more evidence in this area may be forthcoming in the future.
The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania in southern Sweden. It dates from the early-mid 14th century, and is currently in the Swedish History Museum in Stockholm.
Early cannons in Europe often shot arrows and were known by an assortment of names such as pot-de-fer, tonnoire, ribaldis, and büszenpyle. The ribaldis, which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls". Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the ribaudekin clearly became mounted on wheels.
The Battle of Crécy, which pitted the English against the French in 1346, featured the early use of cannon, which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by their cannon would panic the advancing horses along with killing the knights atop them.
Early cannons could also be used for more than simply killing men and scaring horses. English cannon were used defensively in 1346 during the Siege of Breteuil to launch fire onto an advancing siege tower. In this way cannons could be used to burn down siege equipment before it reached the fortifications. Cannon could also be used to shoot fire offensively, as in another battle where a castle was set ablaze by similar methods. The particular incendiary used in these projectiles was most likely a gunpowder mixture. This is one area where early Chinese and European cannons share a similarity, as both were possibly used to shoot fire.
Another aspect of early European cannons is that they were rather small, dwarfed by the bombards which would come later. In fact, it is possible that the cannons used at Crécy were capable of being moved rather quickly as there is an anonymous chronicle that notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannons would eventually give way to larger, wall-breaching guns by the end of the 1300s.
There is no clear consensus on when the cannon first appeared in the Islamic world, with dates ranging from 1260 to the mid-14th century. The cannon may have appeared in the Islamic world in the late 13th century, with Ibn Khaldun in the 14th century stating that cannons were used in the Maghreb region of North Africa in 1274, and other Arabic military treatises in the 14th century referring to the use of cannon by Mamluk forces in 1260 and 1303, and by Muslim forces at the 1324 Siege of Huesca in Spain. However, some scholars do not accept these early dates. While the date of its first appearance is not entirely clear, most historians agree that Mamluk forces were certainly using cannon by 1342. Other accounts may have also mentioned the use of cannon in the early 14th century. An Arabic text dating to 1320–1350 describes a type of gunpowder weapon called a midfa which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some scholars consider this a hand cannon while others dispute this claim. The Nasrid army besieging Elche in 1331 made use of "iron pellets shot with fire".
According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was "the first cannon in history" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Al-Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannons being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term midfa, dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in middle eastern sources to the 1360s. Gabor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic World are vague with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive.
Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid Siege of Sijilmassa in 1274 occurs as follows: "[The Sultan] installed siege engines ... and gunpowder engines ..., which project small balls of iron. These balls are ejected from a chamber ... placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary, having been written about a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians; Ágoston and Peter Purton urge caution regarding claims of Islamic firearms use in the 1204–1324 period, noting that late medieval Arabic texts used the same word for gunpowder, naft, as they did for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon.
The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of 10 miles (16 km). Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book De obsidione Scodrensi (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448 despite the presence of European cannon in the former case.
The similar Dardanelles Guns (named after the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–1809). These were cast in bronze in two parts: the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving it.
Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century.
While there is evidence of cannons in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired from allies in Europe. By 1443, Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece being made by an Iranian rikhtegar, which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common there than in Europe.
Documentary evidence of cannons in Russia does not appear until 1382 and they were used only in sieges, often by the defenders. It was not until 1475 when Ivan III established the first Russian cannon foundry in Moscow that they began to produce cannons natively. The earliest surviving cannon from Russia dates to 1485.
Later, large cannons known as bombards, ranging from three to five feet in length, were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling stones weighing as much as 45 kilograms (99 lb). Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon 3 feet (0.91 m) long and of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby". The largest of their cannons was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one".
Cannons were introduced to the Javanese Majapahit Empire when Kublai Khan's Mongol-Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used a weapon called p'ao against Daha forces. Researchers interpret this weapon differently: it may have been a trebuchet throwing thunderclap bombs, a firearm, a cannon, or a rocket. It is possible that the gunpowder weapons carried by the Mongol–Chinese troops amounted to more than one type.
Thomas Stamford Raffles wrote in The History of Java that in 1247 saka (1325 AD), cannons were widely used in Java, especially by the Majapahit. It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under Mahapatih (prime minister) Gajah Mada (in office 1331–1364) utilized gunpowder technology obtained from the Yuan dynasty for use in its naval fleet.
The Mongol-Chinese gunpowder technology of the Yuan dynasty resulted in the eastern-style cetbang, which is similar to the Chinese cannon. Swivel guns, however, only developed in the archipelago because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. These weapons seem to have been cannons and guns of Ottoman tradition, for example the prangi, a breech-loading swivel gun. A new type of cetbang, called the western-style cetbang, was derived from the Turkish prangi. Just like the prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershot (a large number of small bullets).
Cannons derived from the western-style cetbang can be found in Nusantara, among them the lantaka and lela. Most lantakas were made of bronze, and the earliest ones were breech-loaded. There was a trend toward muzzle-loading weapons during colonial times. When the Portuguese came to the archipelago, they referred to the breech-loading swivel gun as the berço, while the Spaniards called it the verso. A pole gun (bedil tombak) was recorded as being used by Java in 1413.
Duarte Barbosa, writing c. 1514, said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebuses), schioppi (hand cannons), Greek fire, guns (cannons), and other fireworks. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing between 3 and 8 tons and measuring between 3 and 6 m (9.8 and 19.7 ft) in length.
Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty.
Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages; it was collected from the decomposition of large dung hills piled specifically for the purpose. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers, and the Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater of a mountain near the straits of Bali.
In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. With cannons imported from Arabia and the wider Islamic world, the Adalites, led by Ahmed ibn Ibrahim al-Ghazi, were the first African power to introduce cannon warfare to the African continent. Later, as the Portuguese Empire entered the war, it supplied the Abyssinians with cannons and trained them in their use, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and arquebus over traditional weapons.
While the earlier smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Nonetheless, cannons were used for purposes other than battering down walls, as fortifications began using them as defensive instruments; in India, for example, the fort of Raicher had gun ports built into its walls to accommodate defensive cannons. In The Art of War, Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, which opposed the more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia, too, early cannons were placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle, so offensives conducted with cannons were difficult to mount in places such as Iran.
By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding 10 ft (3.0 m) in length, and could weigh up to 20,000 pounds (9,100 kg). Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. The finely ground powder used by the first bombards was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between the grains, allowing fire to travel through and ignite the entire charge quickly and uniformly.
The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape which attempted to force any advance towards it directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well.
By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Because of the cannons' relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe.
Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This type of gun found more use with the Dutch, who learnt to shoot bombs filled with powder from them. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, in which the gunner lit the fuse and then the touch hole. This required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse.
Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot which sped up reloading, increasing the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns.
At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army, by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled.
In England, cannons were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book The Art of Gunnery. Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works).
Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant". Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork.
In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft". Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs.
The lower tier of 17th-century English ships of the line were usually equipped with demi-cannons, guns that fired a 32-pound (15 kg) solid shot, and could weigh up to 3,400 pounds (1,500 kg). Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of 90 m (300 ft), and could dismast even the largest ships at close range. Full cannon fired a 42-pound (19 kg) shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks.
The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third to a quarter of the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed.
Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire 1795 (5 October in the calendar used in France at the time), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when 66 guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties to the Russian forces, whose losses numbered over 20,000 killed and wounded, in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers, when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire.
In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century, listing the various types of cannon and giving instructions for their use.
The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War.
Western cannons during the 19th century became larger, more destructive, more accurate, and longer-ranged. One example is the American 3-inch (76 mm) wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over 1.1 mi (1.8 km). Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range of 1,700 m (5,600 ft).
The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power compared with earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak". Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships."
The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1.
Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy".
When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above.
By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as it began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This 200 mm (8 in) calibre gun was used by the Germans against Paris and could hit targets more than 122 km (76 mi) away.
The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare, and in anti-air guns, both the British and Americans feared unexploded proximity fuses would be reverse engineered, leading them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, where they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43 and its many variations entered service with the Wehrmacht, and were used as both a tank main gun and, as the PaK 43, an anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges.
Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat.
The tendency to create larger calibre cannons during the World Wars has reversed since. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 is powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past, and, in some cases, having been replaced by cruise missiles. However, the Zumwalt-class destroyer's planned armament included the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The warhead, which weighed 24 pounds (11 kg), had a circular error probable of 50 m (160 ft) and was mounted on a rocket to increase the effective range to 100 nmi (190 km), further than that of the Paris Gun. The AGS's barrels would be water-cooled and fire 10 rounds per minute per gun. The combined firepower from both turrets would give a Zumwalt-class destroyer the firepower equivalent to 12 conventional M198 howitzers. Cannons were re-integrated as a main armament in United States Navy ships because satellite-guided munitions fired from a gun are less expensive than cruise missiles but offer similar guidance capability.
Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II, and are usually capable of firing explosive ammunition, even if it is not always used. Machine guns, in contrast, are usually too small to use explosive ammunition; such ammunition is additionally banned in international conflict for the parties to the Saint Petersburg Declaration of 1868.
Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of more than several thousand rounds per minute. The fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute.
Autocannons are often found in aircraft, where they have replaced machine guns, and as shipboard anti-aircraft weapons, because they provide greater destructive power than machine guns.
The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannons; however their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1 with a single 37mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round.
The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works "COW 37 mm gun", was installed in an aircraft. However, the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep angle upwards in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later.
During this period autocannons became available and several fighters of the German Luftwaffe and the Imperial Japanese Navy Air Service were fitted with 20 mm cannons. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was some debate in the RAF as to whether the greater number of possible rounds fired from a machine gun or a smaller number of explosive rounds from a cannon was preferable. Improvements in the rate of fire during the war allowed the cannon to displace the machine gun almost entirely. Cannon were more effective against armour, so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four cannons, versus the six .50 Browning machine guns of US aircraft or the eight to twelve M1919 Browning machine guns on earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannons, like machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either), or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannons to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so-installed Schräge Musik. This term derives from a German colloquialism for jazz music (schräg means "off-key").
Before the Vietnam War, the high speeds that aircraft were attaining led to a move to remove the cannon, due to the mistaken belief that it would be useless in a dogfight; combat experience during the Vietnam War, however, showed conclusively that despite advances in missiles there was still a need for cannon. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannons ranging up to 40 mm. Both are used in the close air support role.
Cannons in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though diminishing recoil by increasing the overall mass of the cannon comes at the cost of mobility.
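As a rough illustration of this trade-off (an idealised sketch rather than a figure from the sources above, and ignoring the momentum carried away by the propellant gases), conservation of momentum for a freely recoiling gun gives

$$M_{\text{cannon}}\,V_{\text{recoil}} = m_{\text{ball}}\,v_{\text{muzzle}}, \qquad\text{so}\qquad V_{\text{recoil}} = \frac{m_{\text{ball}}}{M_{\text{cannon}}}\,v_{\text{muzzle}}.$$

Other things being equal, doubling the mass of the piece roughly halves its recoil velocity: a heavier barrel and carriage absorb recoil more effectively but make the gun correspondingly harder to move.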
Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure; cast-iron cannon are less expensive and more durable generally than bronze and withstand being fired more times without deteriorating. However, cast-iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate.
The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—to be used they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech.
The following terms refer to the components or aspects of a classical western cannon (c. 1850) as illustrated here. In what follows, the words near, close, and behind will refer to those parts towards the thick, closed end of the piece, and far, front, in front of, and before to the thinner, open end.
The main body of a cannon consists of three basic extensions: the foremost and the longest is called the chase, the middle portion is the reinforce, and the closest and briefest portion is the cascabel or cascable.
To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball.
Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a 42-pound (19 kg) shot, as distinct from a demi-cannon – 32 pounds (15 kg), culverin – 18 pounds (8.2 kg), or demi-culverin – 9 pounds (4.1 kg). Gun specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term cannon is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II.
In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets.
Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot. Fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range. Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds.
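The 45-degree figure is consistent with elementary ballistics (an idealised, air-resistance-free sketch, not a formula quoted from period gunnery manuals): over level ground, a ball leaving the muzzle at speed $v$ and elevation $\theta$ travels a horizontal distance

$$R(\theta) = \frac{v^{2}\sin 2\theta}{g},$$

which is greatest at $\theta = 45^{\circ}$. In practice, air resistance and the quality of the powder shortened real ranges considerably, so the relation of roughly ten times the level range should be read as an empirical rule of thumb for a given gun and charge rather than an exact ratio.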
During the Napoleonic Wars, a British gun team consisted of five gunners: one to aim the piece, one to clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole to prevent a draught that might fan a flame; once the charge was loaded, he would prick the bagged charge through the vent hole and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century.
When a cannon had to be abandoned such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking".
A gun was said to be honeycombed when the surface of the bore had cavities, or holes in it, caused by corrosion or casting defects.
In the United States, muzzleloading cannons made before 1899 (and replicas) that are unable to fire fixed ammunition are considered antiques. They are not subject to the Gun Control Act of 1968 or National Firearms Act of 1934. They may be subject to local rules in some jurisdictions, however.
Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them, rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception.
Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples is Pyotr Ilyich Tchaikovsky's 1812 Overture. The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording of the Minnesota Orchestra. Cannon fire is also frequently used in presentations of the 1812 on the American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974.
The hard rock band AC/DC used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. A recording of that song has accompanied the firing of an authentic reproduction of a M1857 12-pounder Napoleon during Columbus Blue Jackets goal celebrations at Nationwide Arena since opening night of the 2007–08 season. The cannon is the focal point of the team's alternate logo on its third jerseys.
Cannons have been fired in touchdown celebrations by several American football teams including the San Diego Chargers. The Pittsburgh Steelers used one only during the 1962 campaign but discontinued it after Buddy Dial was startled by inadvertently running face-first into the cannon's smoky discharge in a 42–27 loss to the Dallas Cowboys.
Cannon recovered from the sea are often extensively damaged from exposure to salt water; electrolytic reduction treatment is required to forestall corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. Cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the cannon from attracting dust.
{
"paragraph_id": 0,
"text": "A cannon is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder (\"black powder\") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. A cannon is a type of heavy artillery weapon.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons do not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannon in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East and the depiction of one in Europe by 1326. Recorded usage of cannon began appearing almost immediately after. They subsequently spread to India, their usage on the subcontinent being first attested to in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg (35,000 lb) cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, with the introduction of limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient \"classic form\" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The word cannon is derived from the Old Italian word cannone, meaning \"large tube\", which came from the Latin canna, in turn originating from the Greek κάννα (kanna), \"reed\", and then generalised to mean any hollow tube-like object; cognate with the Akkadian qanu(m) and the Hebrew qāneh, \"tube, reed\". The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both of the plural forms cannons and cannon are correct.",
"title": "Etymology and terminology"
},
{
"paragraph_id": 5,
"text": "The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort. Co-viative projectiles such as iron scraps or porcelain shards were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128, however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also often considered by some to be the oldest firearm since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannons. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the \"true\" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "References to cannons proliferated throughout China in the following centuries. Cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called The Iron Cannon Affair describing a cannonball fired from an eruptor which could \"pierce the heart or belly when striking a man or horse, and even transfix several persons at once.\" By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrisons' usage of cannon, however, they themselves would use cannon, in the thousands, later on during the siege of Suzhou in 1366.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: Pao). During the Ming dynasty cannons were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries cannon-armed Chinese ships also travelled throughout Southeast Asia. Cannon appeared in Đại Việt by 1390 at the latest.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The first of the western cannon to be introduced were breech-loaders in the early 16th century, which the Chinese began producing themselves by 1523 and improved on by including composite metal construction in their making.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Japan did not acquire cannon until 1510 when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannons against Japanese troops. Despite their defensive advantage and the use of arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–1598), the Ming–Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619, the city was armed with large cannon with cannonballs weighing more than 30 kg (66 lb). His general observation was that the Chinese were militarily capable and had firearms:",
"title": "History"
},
{
"paragraph_id": 13,
"text": "There are many merchants and military persons in the Chinese Empire. They have firearms, and the Chinese are very skillful in military affairs. They go into battle against the Yellow Mongols who fight with bows and arrows.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Outside of China, the earliest texts to mention gunpowder are Roger Bacon's Opus Majus (1267) and Opus Tertium in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae, dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "There is a record of a gun in Europe dating to 1322 being discovered in the nineteenth century but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as De Nobilitatibus, sapientii et prudentiis regum (Concerning the Majesty, Wisdom, and Prudence of Kings), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's, De secretis secretorum Aristotelis. On 11 February of that same year, the Signoria of Florence appointed two officers to obtain canones de mettallo and ammunition for the town's defense. In the following year a document from the Turin area recorded a certain amount was paid \"for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead\". A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using man-portable gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark, however more evidence in this area may be forthcoming in the future.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania in southern Sweden. It dates from the early-mid 14th century, and is currently in the Swedish History Museum in Stockholm.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Early cannons in Europe often shot arrows and were known by an assortment of names such as pot-de-fer, tonnoire, ribaldis, and büszenpyle. The ribaldis, which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, \"the whole plain was covered by men struck down by arrows and cannon balls\". Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the ribaudekin clearly became mounted on wheels.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The Battle of Crecy which pitted the English against the French in 1346 featured the early use of cannon which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by their cannon would panic the advancing horses along with killing the knights atop them.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Early cannons could also be used for more than simply killing men and scaring horses. English cannon were used defensively in 1346 during the Siege of Breteuil to launch fire onto an advancing siege tower. In this way cannons could be used to burn down siege equipment before it reached the fortifications. The use of cannons to shoot fire could also be used offensively as another battle involved the setting of a castle ablaze with similar methods. The particular incendiary used in these projectiles was most likely a gunpowder mixture. This is one area where early Chinese and European cannons share a similarity as both were possibly used to shoot fire.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Another aspect of early European cannons is that they were rather small, dwarfed by the bombards which would come later. In fact, it is possible that the cannons used at Crécy were capable of being moved rather quickly as there is an anonymous chronicle that notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannons would eventually give way to larger, wall-breaching guns by the end of the 1300s.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "There is no clear consensus on when the cannon first appeared in the Islamic world, with dates ranging from 1260 to the mid-14th century. The cannon may have appeared in the Islamic world in the late 13th century, with Ibn Khaldun in the 14th century stating that cannons were used in the Maghreb region of North Africa in 1274, and other Arabic military treatises in the 14th century referring to the use of cannon by Mamluk forces in 1260 and 1303, and by Muslim forces at the 1324 Siege of Huesca in Spain. However, some scholars do not accept these early dates. While the date of its first appearance is not entirely clear, the general consensus among most historians is that there is no doubt the Mamluk forces were using cannon by 1342. Other accounts may have also mentioned the use of cannon in the early 14th century. An Arabic text dating to 1320–1350 describes a type of gunpowder weapon called a midfa which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some scholars consider this a hand cannon while others dispute this claim. The Nasrid army besieging Elche in 1331 made use of \"iron pellets shot with fire\".",
"title": "History"
},
{
"paragraph_id": 22,
"text": "According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was \"the first cannon in history\" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Al-Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannons being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term midfa, dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in middle eastern sources to the 1360s. Gabor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic World are vague with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid Siege of Sijilmassa in 1274 occurs as follows: \"[The Sultan] installed siege engines ... and gunpowder engines ..., which project small balls of iron. These balls are ejected from a chamber ... placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator.\" The source is not contemporary and was written a century later around 1382. Its interpretation has been rejected as anachronistic by some historians, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period as late medieval Arabic texts used the same word for gunpowder, naft, as they did for an earlier incendiary, naphtha. Ágoston and Peter Purton note that in the 1204–1324 period, late medieval Arabic texts used the same word for gunpowder, naft, that they used for an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of 10 miles (16 km). Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book De obsidione Scodrensi (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448 despite the presence of European cannon in the former case.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The similar Dardanelles Guns (for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–1809). These were cast in bronze into two parts: the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving it.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "While there is evidence of cannons in Iran as early as 1405 they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired by allies in Europe. By 1443, Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece being made by an Iranian rikhtegar which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common compared to their use in Europe.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Documentary evidence of cannons in Russia does not appear until 1382 and they were used only in sieges, often by the defenders. It was not until 1475 when Ivan III established the first Russian cannon foundry in Moscow that they began to produce cannons natively. The earliest surviving cannon from Russia dates to 1485.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Later on large cannons were known as bombards, ranging from three to five feet in length and were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling stones weighing as much as 45 kilograms (99 lb). Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon 3 feet (0.91 m) long and of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, \"hurling the pieces everywhere and killing those who happened to be nearby\". The largest of their cannons was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, \"it was the end of an era in more ways than one\".",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Cannons were introduced to the Javanese Majapahit Empire when Kublai Khan's Mongol-Chinese army under the leadership of Ike Mese sought to invade Java in 1293. History of Yuan mentioned that the Mongol used a weapon called p'ao against Daha forces. This weapon is interpreted differently by researchers, it may be a trebuchet that throws thunderclap bombs, firearms, cannons, or rockets. It is possible that the gunpowder weapons carried by the Mongol–Chinese troops amounted to more than one type.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Thomas Stamford Raffles wrote in The History of Java that in 1247 saka (1325 AD), cannons were widely used in Java especially by the Majapahit. It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under Mahapatih (prime minister) Gajah Mada (in office 1331–1364) utilized gunpowder technology obtained from Yuan dynasty for use in naval fleet.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "Mongol-Chinese gunpowder technology of Yuan dynasty resulted in eastern-style cetbang which is similar to Chinese cannon. Swivel guns however, only developed in the archipelago because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. This weapon seems to be cannon and gun of Ottoman tradition, for example the prangi, which is a breech-loading swivel gun. A new type of cetbang, called the western-style cetbang, was derived from the Turkish prangi. Just like prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershots (a large number of small bullets).",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Cannons derived from western-style cetbang can be found in Nusantara, among others were lantaka and lela. Most lantakas were made of bronze and the earliest ones were breech-loaded. There is a trend toward muzzle-loading weapons during colonial times. When the Portuguese came to the archipelago, they referred to the breech-loading swivel gun as berço, while the Spaniards call it verso. A pole gun (bedil tombak) was recorded as being used by Java in 1413.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Duarte Barbosa c. 1514 said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannon (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannon), and other fireworks. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca \"with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India\". By early 16th century, the Javanese already locally-producing large guns, some of them still survived until the present day and dubbed as \"sacred cannon\" or \"holy cannon\". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, length of them between 3 and 6 m (9.8 and 19.7 ft).",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to colonel McKenzie quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater from a mountain near the straits of Bali.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. Imported from Arabia, and the wider Islamic world, the Adalites led by Ahmed ibn Ibrahim al-Ghazi were the first African power to introduce cannon warfare to the African continent. Later on as the Portuguese Empire entered the war it would supply and train the Abyssinians with cannons, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "While previous smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Nonetheless, cannons were used other purposes than battering down walls as fortifications began using cannons as defensive instruments such as an example in India where the fort of Raicher had gun ports built into its walls to accommodate the use of defensive cannons. In The Art of War, Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture and this opposed a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain with mountains providing a great obstacle for them, for these reasons offensives conducted with cannons would be difficult to pull off in places such as Iran.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had barrels exceeding 10 ft (3.0 m) in length, and could weigh up to 20,000 pounds (9,100 kg). Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet. Better powder had been developed by this time as well. Instead of the finely ground powder used by the first bombards, powder was replaced by a \"corned\" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden \"battery-towers\" took on a similar role as siege towers in the gunpowder age—such as that used at Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, \"There is no wall, whatever its thickness that artillery will not destroy in only a few days.\" Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape which attempted to force any advance towards it directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that \"It is true that the arquebuses and the small artillery do much more harm than the heavy artillery.\" This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, and lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from them. Setting the bomb fuse was a problem. \"Single firing\" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, \"double firing\" was tried where the gunner lit the fuse and then the touch hole. This required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous as the heat of firing would light the fuse.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the \"leatheren\", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot which sped up reloading, increasing the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield but Gustavus Adolphus increased the number of cannons sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army, by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "In England, cannons were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences and in his 1647 book The Art of Gunnery. Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works).",
"title": "History"
},
{
"paragraph_id": 46,
"text": "Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a \"gunner's quadrant\". Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders \"were notorious dunces in siegecraft\". Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of \"depth in defence\" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "The lower tier of 17th-century English ships of the line were usually equipped with demi-cannons, guns that fired a 32-pound (15 kg) solid shot, and could weigh up to 3,400 pounds (1,500 kg). Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of 90 m (300 ft), and could dismast even the largest ships at close range. Full cannon fired a 42-pound (19 kg) shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed \"proof by powder\"—and using pressurized water to detect leaks.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third to a quarter of the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire 1795 (5 October in the calendar used in France at the time), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the \"whiff of grapeshot\". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when 66 guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties to the Russian forces, whose losses numbered over 20,000 killed and wounded, in total. At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers, when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century. It listed all the types of cannons and instructions.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American 3-inch (76 mm) wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over 1.1 mi (1.8 km). Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively lightweight, and range of 1,700 m (5,600 ft).",
"title": "History"
},
{
"paragraph_id": 54,
"text": "The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power than earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it \"could do everything but speak\". Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: \"even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships.\"",
"title": "History"
},
{
"paragraph_id": 55,
"text": "The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than \"food for powder\" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as \"the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy\".",
"title": "History"
},
{
"paragraph_id": 57,
"text": "When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were more suited at hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This 200 mm (8 in) calibre gun was used by the Germans against Paris and could hit targets more than 122 km (76 mi) away.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific target. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare, and in anti-air guns, both the British and Americans feared unexploded proximity fuses would be reverse engineered, leading to them limiting their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's \"Christmas present\" for the German army because of their effectiveness against German personnel in the open, when they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43 and many variations, entered service with the Wehrmacht, and was used as both a tank main gun, and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "The tendency to create larger calibre cannons during the World Wars has reversed since. The United States Army, for example, sought a lighter, more versatile howitzer, to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past, and, in some cases, having been replaced by cruise missiles. However, the Zumwalt-class destroyer's planned armament included the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The warhead, which weighted 24 pounds (11 kg), had a circular error of probability of 50 m (160 ft), and was mounted on a rocket, to increase the effective range to 100 nmi (190 km), further than that of the Paris Gun. The AGS's barrels would be water cooled, and fire 10 rounds per minute, per gun. The combined firepower from both turrets would give a Zumwalt-class destroyer the firepower equivalent to 12 conventional M198 howitzers. The reason for the re-integration of cannons as a main armament in United States Navy ships was because satellite-guided munitions fired from a gun would be less expensive than a cruise missile but have a similar guidance capability.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II and are usually capable of using explosive ammunition even if it is not always used. Machine guns in contrast are usually too small to use explosive ammunition; such ammunition is additionally banned in international conflict for the parties to the Saint Petersburg Declaration of 1868.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm \"Bushmaster\" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of more than several thousand rounds per minute. The fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Autocannons are often found in aircraft, where they replaced machine guns and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannons; however their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1 with a single 37mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works \"COW 37 mm gun\", was installed in an aircraft. However, the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep angle upwards in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "During this period autocannons became available and several fighters of the German Luftwaffe and the Imperial Japanese Navy Air Service were fitted with 20 mm cannons. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was a some debate in the RAF as to whether the greater number of possible rounds being fired from a machine gun, or a smaller number of explosive rounds from a cannon was preferable. Improvements during the war in regards to rate of fire allowed the cannon to displace the machine gun almost entirely. The cannon was more effective against armour so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four versus the six .50 Browning machine guns for US aircraft or eight to twelve M1919 Browning machine guns on earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannons, as with machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either); or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannons to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so-installed Schräge Musik. This term derives from a German colloquialism for jazz music (schräg means \"off-key\").",
"title": "History"
},
{
"paragraph_id": 68,
"text": "Preceding the Vietnam War the high speeds aircraft were attaining led to a move to remove the cannon due to the mistaken belief that they would be useless in a dogfight, but combat experience during the Vietnam War showed conclusively that despite advances in missiles, there was still a need for them. Nearly all modern fighter aircraft are armed with an autocannon and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannons ranging up to 40 mm. Both are used in the close air support role.",
"title": "History"
},
{
"paragraph_id": 69,
"text": "Cannons in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though obviously diminishing recoil through increasing the overall mass of the cannon means decreased mobility.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 70,
"text": "Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is \"spongy about the bore\", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure; cast-iron cannon are less expensive and more durable generally than bronze and withstand being fired more times without deteriorating. However, cast-iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 71,
"text": "The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—to be used they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 72,
"text": "The following terms refer to the components or aspects of a classical western cannon (c. 1850) as illustrated here. In what follows, the words near, close, and behind will refer to those parts towards the thick, closed end of the piece, and far, front, in front of, and before to the thinner, open end.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 73,
"text": "The main body of a cannon consists of three basic extensions: the foremost and the longest is called the chase, the middle portion is the reinforce, and the closest and briefest portion is the cascabel or cascable.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 74,
"text": "To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 75,
"text": "Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a 42-pound (19 kg) shot, as distinct from a demi-cannon – 32 pounds (15 kg), culverin – 18 pounds (8.2 kg), or demi-culverin – 9 pounds (4.1 kg). Gun specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term cannon is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II.",
"title": "Materials, parts, and terms"
},
{
"paragraph_id": 76,
"text": "In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets.",
"title": "Operation"
},
{
"paragraph_id": 77,
"text": "Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot. Fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range. Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds.",
"title": "Operation"
},
{
"paragraph_id": 78,
"text": "During the Napoleonic Wars, a British gun team consisted of five gunners to aim it, clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole, to prevent a draught that might fan a flame. The charge loaded, the fourth would prick the bagged charge through the vent hole, and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century.",
"title": "Operation"
},
{
"paragraph_id": 79,
"text": "When a cannon had to be abandoned such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called \"spiking\".",
"title": "Operation"
},
{
"paragraph_id": 80,
"text": "A gun was said to be honeycombed when the surface of the bore had cavities, or holes in it, caused by corrosion or casting defects.",
"title": "Operation"
},
{
"paragraph_id": 81,
"text": "In the United States, muzzleloading cannons made before 1899 (and replicas) that are unable to fire fixed ammunition are considered antiques. They are not subject to the Gun Control Act of 1968 or National Firearms Act of 1934. They may be subject to local rules in some jurisdictions, however.",
"title": "Operation"
},
{
"paragraph_id": 82,
"text": "Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The \"Quaker Gun trick\" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them, rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the \"muzzle\", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception.",
"title": "Deceptive use"
},
{
"paragraph_id": 83,
"text": "Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples is Pyotr Ilyich Tchaikovsky's 1812 Overture. The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording of the Minnesota Orchestra. Cannon fire is also frequently used in presentations of the 1812 on the American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974.",
"title": "In popular culture"
},
{
"paragraph_id": 84,
"text": "The hard rock band AC/DC used cannon in their song \"For Those About to Rock (We Salute You)\", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. A recording of that song has accompanied the firing of an authentic reproduction of a M1857 12-pounder Napoleon during Columbus Blue Jackets goal celebrations at Nationwide Arena since opening night of the 2007–08 season. The cannon is the focal point of the team's alternate logo on its third jerseys.",
"title": "In popular culture"
},
{
"paragraph_id": 85,
"text": "Cannons have been fired in touchdown celebrations by several American football teams including the San Diego Chargers. The Pittsburgh Steelers used one only during the 1962 campaign but discontinued it after Buddy Dial was startled by inadvertently running face-first into the cannon's smoky discharge in a 42–27 loss to the Dallas Cowboys.",
"title": "In popular culture"
},
{
"paragraph_id": 86,
"text": "Cannon recovered from the sea are often extensively damaged from exposure to salt water; electrolytic reduction treatment is required to forestall corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. Cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the cannon from attracting dust.",
"title": "Restoration"
}
] | A cannon is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. A cannon is a type of heavy artillery weapon. The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons. The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons do not appear until the 13th century. In 1288, Yuan dynasty troops are recorded to have used hand cannon in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East and the depiction of one in Europe by 1326. Recorded usage of cannon began appearing almost immediately after. They subsequently spread to India, their usage on the subcontinent being first attested to in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia. Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg (35,000 lb) cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, with the introduction of limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s. | 2001-11-09T12:18:28Z | 2023-12-03T06:26:12Z | [
"Template:Hatnote group",
"Template:Refbegin",
"Template:Pp-semi-indef",
"Template:Reflist",
"Template:Cite web",
"Template:Harvnb",
"Template:In lang",
"Template:Dead link",
"Template:Featured article",
"Template:Convert",
"Template:Cite book",
"Template:Cite dictionary",
"Template:Harvard citation no brackets",
"Template:Commons and category",
"Template:US patent",
"Template:Use British English",
"Template:See also",
"Template:1771 Britannica",
"Template:Citation",
"Template:Webarchive",
"Template:Use dmy dates",
"Template:Lang",
"Template:Rp",
"Template:ISBN",
"Template:Cite journal",
"Template:Cannon",
"Template:Cvt",
"Template:Cite encyclopedia",
"Template:Main",
"Template:USS",
"Template:Cite news",
"Template:Wiktionary",
"Template:Authority control",
"Template:Sclass",
"Template:Cite ODNB",
"Template:Refend",
"Template:Short description",
"Template:Sfn",
"Template:Further",
"Template:Blockquote",
"Template:Page needed"
] | https://en.wikipedia.org/wiki/Cannon |
7,056 | Computer mouse | A computer mouse (plural mice, also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer.
The first public demonstration of a mouse controlling a computer system was done by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the y-dimension. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system.
In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input.
The earliest known written use of the term mouse or mice in reference to a computer pointing device is in Bill English's July 1965 publication, "Computer-Aided Display Control". This likely originated from its resemblance to the shape and size of a mouse, with the cord resembling its tail. The popularity of cordless mice makes the resemblance less obvious.
According to Roger Bates, a hardware designer under English, the term also came about because the cursor on the screen was, for some unknown reason, referred to as "CAT", and the team pictured it chasing the new desktop device.
The plural for the small rodent is always "mice" in modern usage. The plural for a computer mouse is either "mice" or "mouses" according to most dictionaries, with "mice" being more common. The first recorded plural usage is "mice"; the online Oxford Dictionaries cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968.
The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what they called a "roller ball" for this purpose.
The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret.
Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952.
DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project.
Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013.
By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On 14 November 1963, he first recorded his thoughts in his personal notebook about something he initially called a "bug", which in a "3-point" form could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard".
In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the mouse as early models had a cord attached to the rear part of the device which looked like a tail, and in turn, resembled the common mouse. According to Roger Bates, a hardware designer under English, another reason for choosing this name was that the cursor on the screen was also referred to as "CAT" at this time.
As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect.
Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a bulky device (pictured) used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, 3-button mouse for about a year.
On 2 October 1968, three years after Engelbart's prototype but more than two months before his public demo, a mouse device named Rollkugelsteuerung (German for "Trackball control") was shown in a sales brochure by the German company AEG-Telefunken as an optional input device for the SIG 100 vector graphics terminal, part of the system around their process computer TR 86 and the TR 440 [de] main frame. Based on an even earlier trackball device, the mouse device had been developed by the company in 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer distance telex lines with c. 50 baud. Weighing 465 grams (16.4 oz), the device with a total height of about 7 cm (2.8 in) came in a c. 12 cm (4.7 in) diameter hemispherical injection-molded thermoplastic casing featuring one central push button.
As noted above, the device was based on an earlier trackball-like device (also named Rollkugel) that was embedded into radar flight control desks. This trackball had been originally developed by a team led by Rainer Mallebrein [de] at Telefunken Konstanz for the German Bundesanstalt für Flugsicherung [de] (Federal Air Traffic Control). It was part of the corresponding workstation system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of "reversing" the existing Rollkugel trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added costs of DM 1,500 per piece to the already up to 20-million DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technical University Berlin, University of Stuttgart and Konstanz. Several Rollkugel mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum, two others survived in a museum at Stuttgart University, two in Hamburg, the one from Aachen at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Anecdotal reports claim that Telefunken's attempt to patent the device was rejected by the German Patent Office due to lack of inventiveness. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a device named "Touchinput-Einrichtung" ("touch input device") based on a conductively coated glass screen.
The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to use a mouse. Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third integrated mouse to be marketed, shipped as part of a computer and intended for personal computer navigation, came with the Xerox 8010 Star in 1981.
By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. Microsoft's mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985.
A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer.
The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook and clicking while the cursor points at this icon might cause a text editing program to open the file in a window.
Different ways of operating the mouse cause specific things to happen in the GUI.
In addition to traditional pointing-and-clicking actions, users can employ gestural inputs to issue commands or perform specific actions. These stylized motions of the mouse cursor, known as "gestures", allow commands to be issued through movement rather than through buttons and menus alone.
For example, in a drawing program a user might delete a shape by rapidly moving the mouse cursor in an "x" motion over it, triggering the delete command without relying solely on traditional input methods.
Gestures do demand finer motor control and more precise movements, which can be difficult for users with limited dexterity or those new to this mode of interaction. Despite this, several gestural conventions have become widely adopted, the most pervasive being drag and drop.
Drag and drop enables users to manipulate on-screen objects through a simple sequence of actions:
Pressing the mouse button while the cursor hovers over an interface object.
Moving the cursor to a different location while holding the button down.
Releasing the mouse button to complete the action.
For instance, a user can drag a picture representing a file onto an image of a trash can to indicate that the file should be deleted.
In addition to drag and drop, several other semantic gestures have become standard conventions:
Crossing-based goal: This gesture involves crossing a specific boundary or threshold on the screen to trigger an action or complete a task. For example, swiping across the screen to unlock a device or confirm a selection.
Menu traversal: Menu traversal gestures facilitate navigation through hierarchical menus or options. Users can perform gestures such as swiping or scrolling to explore different menu levels or activate specific commands.
Pointing: Pointing gestures involve positioning the mouse cursor over an object or element to interact with it. This fundamental gesture enables users to select, click, or access contextual menus.
Mouseover (pointing or hovering): Mouseover gestures occur when the cursor is positioned over an object without clicking. This action often triggers a visual change or displays additional information about the object, providing users with real-time feedback.
These standard semantic gestures, together with the drag and drop convention, form the building blocks of gestural interfaces; a minimal sketch of how drag and drop can be modeled in software follows below.
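As a rough illustration of how the drag-and-drop convention above can be handled in software, the following sketch models it as a small state machine driven by button and motion events. The event names and the object methods (move_to, accept) are hypothetical and not tied to any particular GUI toolkit.

```python
# Minimal sketch of a drag-and-drop state machine (hypothetical event model).

class DragAndDrop:
    def __init__(self):
        self.dragging = None   # object currently being dragged, if any

    def on_button_down(self, obj_under_cursor):
        # Pressing the button over an object begins a potential drag.
        if obj_under_cursor is not None:
            self.dragging = obj_under_cursor

    def on_motion(self, x, y):
        # While the button is held, the dragged object follows the cursor.
        if self.dragging is not None:
            self.dragging.move_to(x, y)

    def on_button_up(self, drop_target):
        # Releasing the button completes the gesture, e.g. dropping a
        # file icon onto a trash-can icon to request deletion of the file.
        if self.dragging is not None and drop_target is not None:
            drop_target.accept(self.dragging)
        self.dragging = None
```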
At the end of the 20th century, digitizer mice with magnifying glasses were used with AutoCAD to digitize blueprints.
Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often uses modal chording of many different button and key combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect.
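As an illustrative sketch of the mouse-look behaviour just described, the function below accumulates relative mouse motion into camera yaw and pitch angles, clamping pitch so the view cannot flip over. The sensitivity value is an arbitrary placeholder, and sign conventions for the y-axis vary between platforms; the invert_y flag corresponds to the "invert mouse" option discussed later.

```python
# Sketch: turning relative mouse motion into camera yaw/pitch (degrees).

def update_camera(yaw, pitch, dx, dy, sensitivity=0.1, invert_y=False):
    """dx, dy: mouse counts since the last frame; angles in degrees."""
    yaw = (yaw + dx * sensitivity) % 360.0
    if invert_y:
        dy = -dy
    pitch = pitch - dy * sensitivity       # assumes positive dy means the mouse moved "down"
    pitch = max(-89.0, min(89.0, pitch))   # clamp so the view cannot flip over
    return yaw, pitch

print(update_camera(0.0, 0.0, dx=50, dy=-30))  # look slightly right and up
```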
When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
The German company Telefunken published details of its early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC.
The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.
The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes.
Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen.
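The quadrature scheme can be illustrated with a small decoder that watches transitions of the two sensor signals; the lookup table below is one common way to express this and is offered only as a sketch, not as the circuit of any particular mouse.

```python
# Sketch: decoding a quadrature signal pair (A, B) into +1/-1 steps.
# One direction of rotation produces the Gray-code cycle 00 -> 01 -> 11 -> 10 -> 00.

STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """samples: iterable of (A, B) bit pairs taken in time order."""
    position, previous = 0, None
    for a, b in samples:
        state = (a << 1) | b
        if previous is not None and (previous, state) in STEP:
            position += STEP[(previous, state)]
        previous = state
    return position

# Example: one full forward cycle advances the count by four.
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # -> 4
```

Because only one of the two bits changes per step, a transition read slightly late is never misinterpreted as motion in the opposite direction, which is the point of the quadrature arrangement described above.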
The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product.
Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984.
Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.
Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light.
The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected.
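Conceptually, a surface-independent optical mouse estimates motion by comparing successive images of the surface and finding the shift that best aligns them. The brute-force search below is only a toy illustration of that idea under simplified assumptions; real sensors use far more efficient on-chip algorithms.

```python
# Toy sketch: estimate (dx, dy) between two small grayscale frames by
# testing every shift in a small window and keeping the best match.

def estimate_shift(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        err += (prev[y][x] - curr[sy][sx]) ** 2
                        n += 1
            if n == 0:
                continue
            if err / n < best_err:
                best, best_err = (dx, dy), err / n
    return best  # the shift that makes the two frames line up best
```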
Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm".
Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.
A 3D mouse is a computer input device for viewport interaction with at least three degrees of freedom (DoF), e.g. in 3D computer graphics software for manipulating virtual objects, navigating in the viewport, defining camera paths, posing, and desktop motion capture. 3D mice can also be used as spatial controllers for video game interaction, e.g. SpaceOrb 360. To perform such different tasks the used transfer function and the device stiffness are essential for efficient interaction.
The virtual motion is connected to the 3D mouse control handle via a transfer function. Position control means that the virtual position and orientation are proportional to the mouse handle's deflection, whereas velocity control means that the translation and rotation velocity of the controlled object are proportional to the handle deflection. A further essential property of a transfer function is its interaction metaphor, such as the object-in-hand metaphor (the handle moves the controlled object) or the camera-in-hand metaphor (the handle moves the viewpoint).
Ware and Osborne performed an experiment investigating these metaphors whereby it was shown that there is no single best metaphor. For manipulation tasks, the object-in-hand metaphor was superior, whereas for navigation tasks the camera-in-hand metaphor was superior.
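The distinction between position control and velocity control described above can be written as two simple transfer functions: one maps handle deflection directly to a displacement, the other maps it to a rate that is integrated over time. This is only a one-dimensional sketch with arbitrary gain values.

```python
# Sketch: position control vs. velocity control for a 1-D handle deflection.

def position_control(deflection, gain=1.0):
    # Object position is proportional to how far the handle is pushed.
    return gain * deflection

def velocity_control(current_position, deflection, dt, gain=1.0):
    # Object velocity is proportional to deflection; integrate it over time.
    return current_position + gain * deflection * dt

print(position_control(0.5))                 # displaced by 0.5 units
print(velocity_control(0.0, 0.5, dt=0.016))  # drifts while the handle is held
```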
Zhai used the following three categories for device stiffness: isotonic, elastic, and isometric.
Logitech 3D Mouse (1990) was the first ultrasonic mouse and is an example of an isotonic 3D mouse having six degrees of freedom (6DoF). Isotonic devices have also been developed with less than 6DoF, e.g. the Inspector at Technical University of Denmark (5DoF input).
Other examples of isotonic 3D mice are motion controllers, i.e. game controllers that typically use accelerometers to track motion. Motion tracking systems are also used for motion capture, e.g. in the film industry, although such tracking systems are not 3D mice in a strict sense, because motion capture only means recording 3D motion and not 3D interaction.
Early 3D mice for velocity control were almost ideally isometric, e.g. SpaceBall 1003, 2003, 3003, and a device developed at Deutsches Zentrum für Luft und Raumfahrt (DLR), cf. US patent US4589810A.
At DLR an elastic 6DoF sensor was developed that was used in Logitech's SpaceMouse and in the products of 3DConnexion. SpaceBall 4000 FLX has a maximum deflection of approximately 3 mm (0.12 in) at a maximum force of approximately 10 N, that is, a stiffness of approximately 33 N/cm (19 lbf/in). SpaceMouse has a maximum deflection of 1.5 mm (0.059 in) at a maximum force of 4.4 N (0.99 lbf), that is, a stiffness of approximately 30 N/cm (17 lbf/in). Taking this development further, the softly elastic Sundinlabs SpaceCat was developed. SpaceCat has a maximum translational deflection of approximately 15 mm (0.59 in) and maximum rotational deflection of approximately 30° at a maximum force less than 2 N, that is, a stiffness of approximately 1.3 N/cm (0.74 lbf/in). With SpaceCat, Sundin and Fjeld reviewed five comparative experiments performed with different device stiffness and transfer functions and performed a further study comparing 6DoF softly elastic position control with 6DoF stiffly elastic velocity control in a positioning task. They concluded that for positioning tasks position control is to be preferred over velocity control. They further conjectured two preferred types of 3D mouse usage.
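The quoted stiffness figures follow directly from dividing maximum force by maximum deflection; the short check below uses only the values given in the text.

```python
# Stiffness = maximum force / maximum deflection, in N/cm.
spaceball_4000 = 10.0 / 0.3    # 10 N over 3 mm   -> about 33 N/cm
spacemouse     = 4.4 / 0.15    # 4.4 N over 1.5 mm -> about 29 N/cm
spacecat       = 2.0 / 1.5     # <2 N over 15 mm   -> about 1.3 N/cm
print(round(spaceball_4000), round(spacemouse), round(spacecat, 1))  # 33 29 1.3
```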
3DConnexion's 3D mice have been commercially successful over decades. They are used in combination with the conventional mouse for CAD. The Space Mouse is used to orient the target object or change the viewpoint with the non-dominant hand, whereas the dominant hand operates the computer mouse for conventional CAD GUI operation. This is a kind of space-multiplexed input where the 6 DoF input device acts as a graspable user interface that is always connected to the view port.
In November 2010 a German company called Axsotic introduced a new concept of 3D mouse called the 3D Spheric Mouse. This true six-degree-of-freedom input device uses a ball that rotates isometrically about 3 axes and an elastic-polymer-anchored, tetrahedron-inspired suspension for translating the ball without any limitations. A contactless sensor design uses a magnetic sensor array to sense translation along three axes and two optical mouse sensors to sense rotation about three axes. The special tetrahedron suspension allows a user to rotate the ball with the fingers while inputting translations with hand-wrist motion.
With force feedback the device stiffness can dynamically be adapted to the task just performed by the user, e.g. performing positioning tasks with less stiffness than navigation tasks.
In 2000, Logitech introduced a "tactile mouse" known as the "iFeel Mouse" developed by Immersion Corporation that contained a small actuator to enable the mouse to generate simulated physical sensations. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing the internet with a touch-enabled mouse was first developed in 1996 and first implemented commercially in the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed.
Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice.
As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort.
When holding a typical mouse, the ulna and radius bones on the arm are crossed. Some designs attempt to place the palm more vertically, so the bones take more natural parallel position.
Increasing mouse height and angling the mouse topcase can improve wrist posture without negatively affecting performance. Some designs limit wrist movement, encouraging arm movement instead, which may be less precise but healthier. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse specific to the right or left hand, making it more problematic to switch to the other hand when one tires. Time has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before."
Another solution is a pointing bar device. The so-called roller bar mouse is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility.
These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for game mice, especially those designed for use in real-time strategy games such as StarCraft, or in multiplayer online battle arena games such as League of Legends, to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. DPI and CPI refer to the same measure of the mouse's sensitivity: DPI is a misnomer used in the gaming world, and many manufacturers use it to refer to CPI, counts per inch. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay may make further use of the mouse uncomfortable. Some mice have been designed to have adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip: palm, claw, and fingertip.
To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.
While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.
Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys".
In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled.
The earliest mass-market mice, such as on the original Macintosh, Amiga, and Atari ST mice used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer.
The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals must be interpreted differently. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction.
Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode.
In 1986 Apple first implemented the Apple Desktop Bus allowing the daisy chaining of up to 16 devices, including mice and other devices on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998 when Apple's iMac line of computers joined the industry-wide switch to using USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.
With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI style full sized DIN 41524 connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes, with the following format:
Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors.
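A minimal decoder for such a packet might look like the sketch below, assuming the commonly documented bit layout (buttons in the low bits of the first byte, sign bits in bits 4 and 5, overflow flags in bits 6 and 7); with the IntelliMouse extension described next, a fourth byte carrying wheel movement would follow.

```python
# Sketch: decoding a 3-byte PS/2 mouse packet into button states and a
# signed (dx, dy) movement, assuming the commonly documented bit layout.

def decode_ps2_packet(b0, b1, b2):
    buttons = {
        "left":   bool(b0 & 0x01),   # LB
        "right":  bool(b0 & 0x02),   # RB
        "middle": bool(b0 & 0x04),   # MB
    }
    # The sign bits XS and YS extend the 8-bit movement bytes to signed values.
    dx = b1 - 256 if b0 & 0x10 else b1
    dy = b2 - 256 if b0 & 0x20 else b2
    overflow = (bool(b0 & 0x40), bool(b0 & 0x80))   # XV, YV
    return buttons, dx, dy, overflow

# Example: left button held, movement of (+5, -3).
print(decode_ps2_packet(0b00101001, 5, 0b11111101))
```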
A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).
Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3-D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound based tracking which gave 3D input to a few millimeters accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin.
Almost all wired mice today use USB and the USB human interface device class for communication.
Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port.
Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some "nano receivers" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove.
MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2.
Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice.
Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling.
Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support.
Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.
Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices.
Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces.
Windows also has full support for multiple input/mouse configurations for multi-user environments.
Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available.
The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen.
As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X leaving it relegated to custom software usage.
There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.
Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound.
Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.
Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse.
Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter.
The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI) – the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. As of 2007, software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. In most software, an example being the Windows platforms, this setting is named "speed", referring to "cursor precision". However, some operating systems name this setting "acceleration", the typical Apple OS designation. This term is incorrect. Mouse acceleration in most mouse software refers to the change in speed of the cursor over time while the mouse movement is constant.
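The relationship between CPI, physical movement, and cursor movement can be made concrete with a small calculation; the sensitivity multiplier here is an arbitrary example, not a value used by any particular driver.

```python
# Sketch: how far the cursor moves for a given physical mouse movement.

def cursor_pixels(inches_moved, cpi, sensitivity=1.0):
    counts = inches_moved * cpi     # "mickeys" reported by the mouse
    return counts * sensitivity     # pixels, if one count moves the cursor one pixel by default

print(cursor_pixels(1.0, 800))        # 800 px for 1 inch of movement at 800 CPI
print(cursor_pixels(1.0, 800, 0.5))   # 400 px with the sensitivity halved in software
```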
For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting.
Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.
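A sketch of such a two-threshold doubling scheme is shown below; the threshold values are placeholders rather than actual defaults, and comparing against the original count is one plausible reading of the scheme.

```python
# Sketch of two-threshold "ballistics": counts above the first threshold are
# doubled, and doubled again above the second (applied per axis).

def accelerate(counts, threshold1=6, threshold2=10):
    magnitude = abs(counts)
    scale = 1
    if magnitude > threshold1:
        scale *= 2
    if magnitude > threshold2:
        scale *= 2
    return counts * scale

print([accelerate(c) for c in (2, 7, 12)])   # -> [2, 14, 48]
```

Because the scaling depends on how fast the mouse is moving in each axis, the resulting cursor response is nonlinear, which is exactly the behaviour the paragraph above describes.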
Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance.
The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist.
Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass.
Some mice also come with small "pads" attached to the bottom surface, also called mouse feet or mouse skates, that help the user slide the mouse smoothly across surfaces.
Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.
The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS).
The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse.
The device often functions as an interface for PC-based computer games and sometimes for video game consoles. The Classic Mac OS Desk Accessory Puzzle in 1984 was the first game designed specifically for a mouse.
FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs.
Many shooting genre players prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as "flick-shotting") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition the stick also has a finite magnitude; if the player is currently using the stick to move at a non-zero velocity their ability to increase the rate of movement of the camera is further limited based on the position their displaced stick was already at before executing the maneuver. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements; all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters.
Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device. Depending on how deeply hardcoded this misbehavior is, internal user patches or external 3rd-party software may be able to fix it. Individual game engines will also have their own sensitivities. This often restricts one from taking a game's existing sensitivity, transferring it to another, and acquiring the same 360 rotational measurements. A sensitivity converter is required in order to translate rotational movements properly.
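A sensitivity converter of the kind mentioned here typically works by keeping the physical distance required for a full 360-degree turn constant across games. The sketch below assumes each game exposes a "yaw" constant (degrees turned per mouse count per unit of in-game sensitivity); the numeric values are hypothetical and do not correspond to any specific title.

```python
# Sketch: convert an in-game sensitivity so that a full 360-degree turn
# takes the same physical mouse movement in another game.

def cm_per_360(dpi, sensitivity, yaw):
    counts_per_360 = 360.0 / (yaw * sensitivity)
    return counts_per_360 / dpi * 2.54        # counts -> inches -> centimetres

def convert_sensitivity(sens_a, yaw_a, yaw_b):
    # Equal cm/360 in both games requires yaw_a * sens_a == yaw_b * sens_b.
    return sens_a * yaw_a / yaw_b

# Hypothetical yaw constants for two games:
print(cm_per_360(dpi=800, sensitivity=2.0, yaw=0.022))
print(convert_sensitivity(2.0, yaw_a=0.022, yaw_b=0.0066))
```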
Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse.
The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to aim down the weapon sights. In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.
Players can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward, and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with the typical WASD configuration.
Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent, keeping the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse toward the opponent.
Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse bungees are often used with wired gaming mice because they eliminate the annoyance of cable drag.
Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.
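In code, the inversion option usually amounts to a sign flip on the vertical mouse delta before it is applied to the camera pitch. The minimal sketch below assumes a screen-style convention in which moving the mouse forward yields a negative vertical delta; sensitivity, names, and sign conventions are illustrative and vary between engines.

```python
# Minimal mouse-look sketch with an "invert mouse" option.
# Sensitivity, names, and sign conventions are illustrative only.

SENSITIVITY = 0.05   # degrees of rotation per mouse count

def apply_mouse_look(yaw, pitch, dx, dy, invert_y=False):
    yaw += dx * SENSITIVITY
    # Assumed convention: pushing the mouse forward gives a negative dy, which
    # raises the view; inverting flips that, like pulling back on a flight stick.
    dy = -dy if invert_y else dy
    pitch -= dy * SENSITIVITY
    pitch = max(-89.0, min(89.0, pitch))   # clamp so the view cannot flip over
    return yaw, pitch

print(apply_mouse_look(0.0, 0.0, dx=100, dy=-40))                 # (5.0, 2.0)  looks up
print(apply_mouse_look(0.0, 0.0, dx=100, dy=-40, invert_y=True))  # (5.0, -2.0) looks down
```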
After id Software's commercial hit Doom, which did not support vertical aiming, competitor Bungie's Marathon became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users now regard as non-inverted (by default, moving the mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users now know it.
In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. A mouse was also released for the Nintendo 64, although it was only released in Japan. The 1992 game Mario Paint in particular used the mouse's capabilities, as did its Japanese-only successor Mario Artist on the N64 for its 64DD disk drive peripheral in 1999. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse product for the PlayStation console, included one with the Linux for PlayStation 2 kit, and allowed owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this feature implemented in a later software update, and this support was retained on its successor, the Wii U. Microsoft's Xbox line of game consoles (which used operating systems based on modified versions of Windows NT) also had system-wide mouse support using USB. | [
{
"paragraph_id": 0,
"text": "A computer mouse (plural mice, also mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer (called a cursor) on a display, which allows a smooth control of the graphical user interface of a computer.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The first public demonstration of a mouse controlling a computer system was done by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the x-dimension and one in the Y. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The earliest known written use of the term mouse or mice in reference to a computer pointing device is in Bill English's July 1965 publication, \"Computer-Aided Display Control\". This likely originated from its resemblance to the shape and size of a mouse, with the cord resembling its tail. The popularity of wireless mice without cords makes the resemblance less obvious.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "According to Roger Bates, a hardware designer under English, the term also came about because the cursor on the screen was for some unknown reason referred to as \"CAT\" and was seen by the team as if it would be chasing the new desktop device.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "The plural for the small rodent is always \"mice\" in modern usage. The plural for a computer mouse is either \"mice\" or \"mouses\" according to most dictionaries, with \"mice\" being more common. The first recorded plural usage is \"mice\"; the online Oxford Dictionaries cites a 1984 use, and earlier uses include J. C. R. Licklider's \"The Computer as a Communication Device\" of 1968.",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what they called a \"roller ball\" for this purpose.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to \"augment\" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On 14 November 1963, he first recorded his thoughts in his personal notebook about something he initially called a \"bug\", which in a \"3-point\" form could have a \"drop point and 2 orthogonal wheels\". He wrote that the \"bug\" would be \"easier\" and \"more natural\" to use, and unlike a stylus, it would stay still when let go, which meant it would be \"much better for coordination with the keyboard\".",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the mouse, as early models had a cord attached to the rear part of the device which looked like a tail and, in turn, resembled the common mouse. According to Roger Bates, a hardware designer working under English, another reason for choosing this name was that the cursor on the screen was also referred to as \"CAT\" at this time.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "As noted above, this \"mouse\" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a bulky device (pictured) used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the \"Mother of All Demos\", Engelbart's group had been using their second-generation, 3-button mouse for about a year.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "",
"title": "History"
},
{
"paragraph_id": 17,
"text": "On 2 October 1968, three years after Engelbart's prototype but more than two months before his public demo, a mouse device named Rollkugelsteuerung (German for \"Trackball control\") was shown in a sales brochure by the German company AEG-Telefunken as an optional input device for the SIG 100 vector graphics terminal, part of the system around their process computer TR 86 and the TR 440 [de] main frame. Based on an even earlier trackball device, the mouse device had been developed by the company in 1966 in what had been a parallel and independent discovery. As the name suggests and unlike Engelbart's mouse, the Telefunken model already had a ball (diameter 40 mm, weight 40 g) and two mechanical 4-bit rotational position transducers with Gray code-like states, allowing easy movement in any direction. The bits remained stable for at least two successive states to relax debouncing requirements. This arrangement was chosen so that the data could also be transmitted to the TR 86 front-end process computer and over longer distance telex lines with c. 50 baud. Weighing 465 grams (16.4 oz), the device with a total height of about 7 cm (2.8 in) came in a c. 12 cm (4.7 in) diameter hemispherical injection-molded thermoplastic casing featuring one central push button.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "As noted above, the device was based on an earlier trackball-like device (also named Rollkugel) that was embedded into radar flight control desks. This trackball had been originally developed by a team led by Rainer Mallebrein [de] at Telefunken Konstanz for the German Bundesanstalt für Flugsicherung [de] (Federal Air Traffic Control). It was part of the corresponding workstation system SAP 300 and the terminal SIG 3001, which had been designed and developed since 1963. Development for the TR 440 main frame began in 1965. This led to the development of the TR 86 process computer system with its SIG 100-86 terminal. Inspired by a discussion with a university customer, Mallebrein came up with the idea of \"reversing\" the existing Rollkugel trackball into a moveable mouse-like device in 1966, so that customers did not have to be bothered with mounting holes for the earlier trackball device. The device was finished in early 1968, and together with light pens and trackballs, it was commercially offered as an optional input device for their system starting later that year. Not all customers opted to buy the device, which added costs of DM 1,500 per piece to the already up to 20-million DM deal for the main frame, of which only a total of 46 systems were sold or leased. They were installed at more than 20 German universities including RWTH Aachen, Technical University Berlin, University of Stuttgart and Konstanz. Several Rollkugel mice installed at the Leibniz Supercomputing Centre in Munich in 1972 are well preserved in a museum, two others survived in a museum at Stuttgart University, two in Hamburg, the one from Aachen at the Computer History Museum in the US, and yet another sample was recently donated to the Heinz Nixdorf MuseumsForum (HNF) in Paderborn. Anecdotal reports claim that Telefunken's attempt to patent the device was rejected by the German Patent Office due to lack of inventiveness. For the air traffic control system, the Mallebrein team had already developed a precursor to touch screens in form of an ultrasonic-curtain-based pointing device in front of the display. In 1970, they developed a device named \"Touchinput-Einrichtung\" (\"touch input device\") based on a conductively coated glass screen.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to use a mouse. Inspired by PARC's Alto, the Lilith, a computer which had been developed by a team around Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse shipped as a part of a computer and intended for personal computer navigation came with the Xerox 8010 Star in 1981.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that \"Practically, I have the market all to myself right now\"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. Microsoft's mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer.",
"title": "Operation"
},
{
"paragraph_id": 22,
"text": "The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or pointing (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called \"icons\" and other elements. For example, a text file might be represented by a picture of a paper notebook and clicking while the cursor points at this icon might cause a text editing program to open the file in a window.",
"title": "Operation"
},
{
"paragraph_id": 23,
"text": "Different ways of operating the mouse cause specific things to happen in the GUI:",
"title": "Operation"
},
{
"paragraph_id": 24,
"text": "Gestural interfaces have become an integral part of modern computing, allowing users to interact with their devices in a more intuitive and natural way. In addition to traditional pointing-and-clicking actions, users can now employ gestural inputs to issue commands or perform specific actions. These stylized motions of the mouse cursor, known as \"gestures\", have the potential to enhance user experience and streamline workflow.",
"title": "Operation"
},
{
"paragraph_id": 25,
"text": "To illustrate the concept of gestural interfaces, consider a drawing program. In this scenario, a user can employ a gesture to delete a shape on the canvas. By rapidly moving the mouse cursor in an \"x\" motion over the shape, the user can trigger the command to delete the selected shape. This gesture-based interaction enables users to perform actions quickly and efficiently without relying solely on traditional input methods.",
"title": "Operation"
},
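To make the "x"-motion example above concrete, here is a small, toolkit-independent Python sketch (all names are invented) that reduces a recorded cursor path to compass-direction strokes and treats the presence of both diagonals as an "x" gesture.

```python
import math

# Illustrative gesture sketch: classify a cursor path as an "x" stroke by
# reducing it to compass directions and checking that both diagonals occur.
# Not based on any particular toolkit's gesture API.

DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def direction(dx, dy):
    """Map a movement segment to one of 8 compass points (screen y grows downward)."""
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    return DIRS[int((angle + 22.5) // 45) % 8]

def stroke_sequence(points, min_step=5):
    seq = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if math.hypot(x1 - x0, y1 - y0) < min_step:
            continue                       # ignore jitter below the step size
        d = direction(x1 - x0, y1 - y0)
        if not seq or seq[-1] != d:
            seq.append(d)
    return seq

def looks_like_x(points):
    seq = stroke_sequence(points)
    return "SE" in seq and "SW" in seq     # both diagonals of an "x" are present

# Down-right stroke, reposition upward, then down-left stroke.
path = [(0, 0), (40, 40), (80, 80), (80, 0), (40, 40), (0, 80)]
print(looks_like_x(path))                  # True
```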
{
"paragraph_id": 26,
"text": "While gestural interfaces offer a more immersive and interactive user experience, they also present challenges. One of the primary difficulties lies in the requirement of finer motor control from users. Gestures demand precise movements, which can be more challenging for individuals with limited dexterity or those who are new to this mode of interaction.",
"title": "Operation"
},
{
"paragraph_id": 27,
"text": "However, despite these challenges, gestural interfaces have gained popularity due to their ability to simplify complex tasks and improve efficiency. Several gestural conventions have become widely adopted, making them more accessible to users. One such convention is the drag and drop gesture, which has become pervasive across various applications and platforms.",
"title": "Operation"
},
{
"paragraph_id": 28,
"text": "The drag and drop gesture is a fundamental gestural convention that enables users to manipulate objects on the screen seamlessly. It involves a series of actions performed by the user:",
"title": "Operation"
},
{
"paragraph_id": 29,
"text": "Pressing the mouse button while the cursor hovers over an interface object.",
"title": "Operation"
},
{
"paragraph_id": 30,
"text": "Moving the cursor to a different location while holding the button down.",
"title": "Operation"
},
{
"paragraph_id": 31,
"text": "Releasing the mouse button to complete the action.",
"title": "Operation"
},
{
"paragraph_id": 32,
"text": "This gesture allows users to transfer or rearrange objects effortlessly. For instance, a user can drag and drop a picture representing a file onto an image of a trash can, indicating the intention to delete the file. This intuitive and visual approach to interaction has become synonymous with organizing digital content and simplifying file management tasks.",
"title": "Operation"
},
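The press, move, and release steps listed above map naturally onto a small state machine that remembers which object is being dragged between the button-press and the button-release. The sketch below uses invented object and event names purely for illustration.

```python
# Illustrative drag-and-drop state machine following the press/move/release
# steps described above. Object and event names are invented for the sketch.

class DragAndDrop:
    def __init__(self):
        self.dragged = None                  # object currently being dragged

    def mouse_down(self, obj_under_cursor):
        self.dragged = obj_under_cursor      # step 1: press over an object

    def mouse_move(self, x, y):
        if self.dragged is not None:         # step 2: the object follows the cursor
            self.dragged["x"], self.dragged["y"] = x, y

    def mouse_up(self, obj_under_cursor):
        dropped, self.dragged = self.dragged, None
        if dropped is not None and obj_under_cursor is not None:
            return f"dropped {dropped['name']} onto {obj_under_cursor['name']}"
        return "drag cancelled"              # step 3: release completes the action

file_icon = {"name": "report.txt", "x": 10, "y": 10}
trash_can = {"name": "trash can"}

dnd = DragAndDrop()
dnd.mouse_down(file_icon)
dnd.mouse_move(300, 400)
print(dnd.mouse_up(trash_can))               # dropped report.txt onto trash can
```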
{
"paragraph_id": 33,
"text": "In addition to the drag and drop gesture, several other semantic gestures have emerged as standard conventions within the gestural interface paradigm. These gestures serve specific purposes and contribute to a more intuitive user experience. Some of the notable semantic gestures include:",
"title": "Operation"
},
{
"paragraph_id": 34,
"text": "Crossing-based goal: This gesture involves crossing a specific boundary or threshold on the screen to trigger an action or complete a task. For example, swiping across the screen to unlock a device or confirm a selection.",
"title": "Operation"
},
{
"paragraph_id": 35,
"text": "Menu traversal: Menu traversal gestures facilitate navigation through hierarchical menus or options. Users can perform gestures such as swiping or scrolling to explore different menu levels or activate specific commands.",
"title": "Operation"
},
{
"paragraph_id": 36,
"text": "Pointing: Pointing gestures involve positioning the mouse cursor over an object or element to interact with it. This fundamental gesture enables users to select, click, or access contextual menus.",
"title": "Operation"
},
{
"paragraph_id": 37,
"text": "Mouseover (pointing or hovering): Mouseover gestures occur when the cursor is positioned over an object without clicking. This action often triggers a visual change or displays additional information about the object, providing users with real-time feedback.",
"title": "Operation"
},
{
"paragraph_id": 38,
"text": "These standard semantic gestures, along with the drag and drop convention, form the building blocks of gestural interfaces, allowing users to interact with digital content using intuitive and natural movements.",
"title": "Operation"
},
{
"paragraph_id": 39,
"text": "At the end of the 20th century, a digitizer mouse with a magnifying glass was used with AutoCAD for the digitization of blueprints.",
"title": "Operation"
},
{
"paragraph_id": 40,
"text": "Other uses of the mouse's input occur commonly in special application domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's \"head\" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate so that all sides can be examined. 3D design and animation software often modally chord many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect.",
"title": "Operation"
},
{
"paragraph_id": 41,
"text": "When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.",
"title": "Operation"
},
{
"paragraph_id": 42,
"text": "The German company Telefunken published information on their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC.",
"title": "Types"
},
{
"paragraph_id": 43,
"text": "The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.",
"title": "Types"
},
{
"paragraph_id": 44,
"text": "The ball mouse has two freely rotating rollers. These are located 90 degrees apart. One roller detects the forward-backward motion of the mouse and the other the left-right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes.",
"title": "Types"
},
{
"paragraph_id": 45,
"text": "Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen.",
"title": "Types"
},
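The quadrature scheme described in the two paragraphs above can be decoded in software with a small state table: each valid change of the two sensor bits moves the count by one step, and the direction of travel follows from the order of the Gray-code states. A minimal sketch, not tied to any particular device:

```python
# Minimal quadrature decoder sketch. Each change of the two sensor bits A and B
# advances the count by +1 or -1 depending on the direction of travel; a jump
# of two states is invalid and ignored here. Illustrative, device-independent.

# Gray-code order of (A, B) states as the wheel turns one way: 00 -> 01 -> 11 -> 10 -> 00
SEQUENCE = [0b00, 0b01, 0b11, 0b10]

def decode(samples):
    position = 0
    prev = samples[0]
    for state in samples[1:]:
        if state == prev:
            continue
        step = (SEQUENCE.index(state) - SEQUENCE.index(prev)) % 4
        if step == 1:
            position += 1          # one step forward
        elif step == 3:
            position -= 1          # one step backward
        prev = state               # step == 2 would mean a missed sample
    return position

forward = [0b00, 0b01, 0b11, 0b10, 0b00, 0b01]
backward = list(reversed(forward))
print(decode(forward), decode(backward))   # 5 -5
```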
{
"paragraph_id": 46,
"text": "The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product.",
"title": "Types"
},
{
"paragraph_id": 47,
"text": "Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more \"intelligent\"; though optical mice from Mouse Systems had incorporated microprocessors by 1984.",
"title": "Types"
},
{
"paragraph_id": 48,
"text": "Another type of mechanical mouse, the \"analog mouse\" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The \"Color Mouse\", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.",
"title": "Types"
},
{
"paragraph_id": 49,
"text": "Early optical mice relied entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light.",
"title": "Types"
},
{
"paragraph_id": 50,
"text": "The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes provide good resolution and precision, improving performance on opaque specular surfaces. Later, more surface-independent optical mice use an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. Battery powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected.",
"title": "Types"
},
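The image-sensor approach mentioned above works by comparing successive snapshots of the surface and finding the shift that best aligns them. The brute-force sketch below (pure Python, with synthetic frame data) scores candidate shifts by their mean absolute pixel difference; real sensors do this in dedicated hardware at thousands of frames per second, so this is only a conceptual illustration.

```python
# Conceptual sketch of optical-mouse motion estimation: find the (dx, dy)
# shift that best aligns two successive surface images, scored by mean
# absolute difference over the overlapping region. Synthetic data, pure Python.

def best_shift(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_score = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        total += abs(prev[y][x] - curr[sy][sx])
                        n += 1
            score = total / n
            if score < best_score:
                best_score, best = score, (dx, dy)
    return best

frame1 = [[(7 * x + 13 * y) % 256 for x in range(8)] for y in range(8)]
# Simulate the surface sliding one pixel right and one pixel down under the sensor.
frame2 = [[frame1[(y - 1) % 8][(x - 1) % 8] for x in range(8)] for y in range(8)]
print(best_shift(frame1, frame2))   # (1, 1)
```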
{
"paragraph_id": 51,
"text": "Often called \"air mice\" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or \"gorilla arm\".",
"title": "Types"
},
{
"paragraph_id": 52,
"text": "Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.",
"title": "Types"
},
{
"paragraph_id": 53,
"text": "A 3D mouse is a computer input device for viewport interaction with at least three degrees of freedom (DoF), e.g. in 3D computer graphics software for manipulating virtual objects, navigating in the viewport, defining camera paths, posing, and desktop motion capture. 3D mice can also be used as spatial controllers for video game interaction, e.g. SpaceOrb 360. To perform such different tasks the used transfer function and the device stiffness are essential for efficient interaction.",
"title": "Types"
},
{
"paragraph_id": 54,
"text": "The virtual motion is connected to the 3D mouse control handle via a transfer function. Position control means that the virtual position and orientation is proportional to the mouse handle's deflection whereas velocity control means that translation and rotation velocity of the controlled object is proportional to the handle deflection. A further essential property of a transfer function is its interaction metaphor:",
"title": "Types"
},
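The difference between the two transfer functions above can be shown on a single axis: under position control the object's displacement tracks the handle deflection directly, while under velocity control the deflection sets a rate that keeps integrating for as long as it is held. Gains and names in the sketch below are arbitrary illustrative values.

```python
# One-axis sketch of the two transfer functions described above.
# Gains, rates, and names are arbitrary illustrative values.

POSITION_GAIN = 2.0    # mm of object motion per mm of handle deflection
VELOCITY_GAIN = 20.0   # mm/s of object motion per mm of handle deflection

def position_control(deflection_mm):
    """Object displacement tracks the handle deflection directly."""
    return POSITION_GAIN * deflection_mm

def velocity_control(current_pos_mm, deflection_mm, dt):
    """Handle deflection sets a velocity that is integrated every frame."""
    return current_pos_mm + VELOCITY_GAIN * deflection_mm * dt

# Holding a 1.5 mm deflection for one second at 100 Hz: position control gives
# a fixed 3 mm offset, while velocity control keeps drifting at 30 mm/s.
pos = 0.0
for _ in range(100):
    pos = velocity_control(pos, 1.5, 0.01)
print(position_control(1.5), round(pos, 1))   # 3.0 30.0
```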
{
"paragraph_id": 55,
"text": "Ware and Osborne performed an experiment investigating these metaphors whereby it was shown that there is no single best metaphor. For manipulation tasks, the object-in-hand metaphor was superior, whereas for navigation tasks the camera-in-hand metaphor was superior.",
"title": "Types"
},
{
"paragraph_id": 56,
"text": "Zhai used the following three categories for device stiffness:",
"title": "Types"
},
{
"paragraph_id": 57,
"text": "Logitech 3D Mouse (1990) was the first ultrasonic mouse and is an example of an isotonic 3D mouse having six degrees of freedom (6DoF). Isotonic devices have also been developed with less than 6DoF, e.g. the Inspector at Technical University of Denmark (5DoF input).",
"title": "Types"
},
{
"paragraph_id": 58,
"text": "Other examples of isotonic 3D mice are motion controllers, i.e. a type of game controller that typically uses accelerometers to track motion. Motion tracking systems are also used for motion capture, e.g. in the film industry, although these tracking systems are not 3D mice in a strict sense, because motion capture only means recording 3D motion and not 3D interaction.",
"title": "Types"
},
{
"paragraph_id": 59,
"text": "Early 3D mice for velocity control were almost ideally isometric, e.g. SpaceBall 1003, 2003, 3003, and a device developed at Deutsches Zentrum für Luft und Raumfahrt (DLR), cf. US patent US4589810A.",
"title": "Types"
},
{
"paragraph_id": 60,
"text": "At DLR an elastic 6DoF sensor was developed that was used in Logitech's SpaceMouse and in the products of 3DConnexion. SpaceBall 4000 FLX has a maximum deflection of approximately 3 mm (0.12 in) at a maximum force of approximately 10N, that is, a stiffness of approximately 33 N/cm (19 lbf/in). SpaceMouse has a maximum deflection of 1.5 mm (0.059 in) at a maximum force of 4.4 N (0.99 lbf), that is, a stiffness of approximately 30 N/cm (17 lbf/in). Taking this development further, the softly elastic Sundinlabs SpaceCat was developed. SpaceCat has a maximum translational deflection of approximately 15 mm (0.59 in) and maximum rotational deflection of approximately 30° at a maximum force less than 2N, that is, a stiffness of approximately 1.3 N/cm (0.74 lbf/in). With SpaceCat Sundin and Fjeld reviewed five comparative experiments performed with different device stiffness and transfer functions and performed a further study comparing 6DoF softly elastic position control with 6DoF stiffly elastic velocity control in a positioning task. They concluded that for positioning tasks position control is to be preferred over velocity control. They could further conjecture the following two types of preferred 3D mouse usage:",
"title": "Types"
},
{
"paragraph_id": 61,
"text": "3DConnexion's 3D mice have been commercially successful over decades. They are used in combination with the conventional mouse for CAD. The Space Mouse is used to orient the target object or change the viewpoint with the non-dominant hand, whereas the dominant hand operates the computer mouse for conventional CAD GUI operation. This is a kind of space-multiplexed input where the 6 DoF input device acts as a graspable user interface that is always connected to the view port.",
"title": "Types"
},
{
"paragraph_id": 62,
"text": "In November 2010 a German company called Axsotic introduced a new concept of 3D mouse called the 3D Spheric Mouse. This true six-degree-of-freedom input device uses a ball that rotates isometrically about 3 axes and an elastic-polymer-anchored, tetrahedron-inspired suspension for translating the ball without any limitations. A contactless sensor design uses a magnetic sensor array for sensing translation along three axes and two optical mouse sensors for rotation about three axes. The special tetrahedron suspension allows a user to rotate the ball with the fingers while inputting translations with hand-wrist motion.",
"title": "Types"
},
{
"paragraph_id": 63,
"text": "With force feedback the device stiffness can dynamically be adapted to the task just performed by the user, e.g. performing positioning tasks with less stiffness than navigation tasks.",
"title": "Types"
},
{
"paragraph_id": 64,
"text": "In 2000, Logitech introduced a \"tactile mouse\" known as the \"iFeel Mouse\", developed by Immersion Corporation, that contained a small actuator to enable the mouse to generate simulated physical sensations. Such a mouse can augment user interfaces with haptic feedback, such as giving feedback when crossing a window boundary. Surfing the internet with a touch-enabled mouse was first developed in 1996 and first implemented commercially in the Wingman Force Feedback Mouse. It requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed.",
"title": "Types"
},
{
"paragraph_id": 65,
"text": "Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice.",
"title": "Types"
},
{
"paragraph_id": 66,
"text": "As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis, and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort.",
"title": "Types"
},
{
"paragraph_id": 67,
"text": "When holding a typical mouse, the ulna and radius bones on the arm are crossed. Some designs attempt to place the palm more vertically, so the bones take more natural parallel position.",
"title": "Types"
},
{
"paragraph_id": 68,
"text": "Increasing mouse height and angling the mouse topcase can improve wrist posture without negatively affecting performance. Some designs limit wrist movement and encourage arm movement instead, which may be less precise but better from a health point of view. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse right- or left-hand specific, making it more problematic to switch to the other hand when one tires. Time has criticized manufacturers for offering few or no left-handed ergonomic mice: \"Oftentimes I felt like I was dealing with someone who'd never actually met a left-handed person before.\"",
"title": "Types"
},
{
"paragraph_id": 69,
"text": "Another solution is a pointing bar device. The so-called roller bar mouse is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility.",
"title": "Types"
},
{
"paragraph_id": 70,
"text": "These mice are specifically designed for use in computer games. They typically employ a wider array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse or they can be assigned (programmed) to macros (i.e., for opening a program or for use instead of a key combination). It is also common for game mice, especially those designed for use in real-time strategy games such as StarCraft, or in multiplayer online battle arena games such as League of Legends, to have a relatively high sensitivity, measured in dots per inch (DPI), which can be as high as 25,600. DPI and CPI both refer to the mouse's sensitivity: DPI is a misnomer used in the gaming world, and many manufacturers use it to refer to CPI, counts per inch. Some advanced mice from gaming manufacturers also allow users to adjust the weight of the mouse by adding or subtracting weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay sessions may make further use of the mouse uncomfortable. Some mice have been designed with adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip:",
"title": "Types"
},
{
"paragraph_id": 71,
"text": "To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB, or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 72,
"text": "While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 73,
"text": "Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls \"mickeys\".",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 74,
"text": "In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 75,
"text": "The earliest mass-market mice, such as those for the original Macintosh, Amiga, and Atari ST, used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 76,
"text": "The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals must be interpreted differently. As a result, plugging a mouse into a joystick port causes the \"joystick\" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the \"mouse\" to only be able to move a single pixel in each direction.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 77,
"text": "Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: \"PC\" for MSC mode, \"MS\" for Microsoft mode.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 78,
"text": "In 1986 Apple first implemented the Apple Desktop Bus allowing the daisy chaining of up to 16 devices, including mice and other devices on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998 when Apple's iMac line of computers joined the industry-wide switch to using USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 79,
"text": "With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 port for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI style full sized DIN 41524 connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes, with the following format:",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 80,
"text": "Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors.",
"title": "Connectivity and communication protocols"
},
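Given that packet layout, a decoder only has to pull the button and flag bits out of the first byte and apply the sign bits to the two 8-bit movement values. The sketch below uses the bit positions as they are commonly documented for PS/2 stream mode; treat them as a summary to be checked against the controller documentation rather than an authoritative reference.

```python
# Decoder sketch for a 3-byte PS/2 stream-mode packet, using the bit names
# from the description above (LB, MB, RB, XS, YS, XV, YV). Bit positions follow
# the commonly documented layout; check them against the actual controller
# documentation before relying on this.

def decode_ps2_packet(b0, b1, b2):
    lb, rb, mb = bool(b0 & 0x01), bool(b0 & 0x02), bool(b0 & 0x04)
    xs, ys = bool(b0 & 0x10), bool(b0 & 0x20)     # XS, YS: sign bits
    xv, yv = bool(b0 & 0x40), bool(b0 & 0x80)     # XV, YV: overflow bits
    # Movements are 9-bit two's complement: 8 data bits plus the sign bit above.
    dx = b1 - 256 if xs else b1
    dy = b2 - 256 if ys else b2
    return {"left": lb, "middle": mb, "right": rb}, (dx, dy), (xv, yv)

# Left button held, +5 counts in X, -3 counts in Y, no overflow.
print(decode_ps2_packet(0b00101001, 5, 0xFD))
```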
{
"paragraph_id": 81,
"text": "A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of \"IntelliMouse\" and \"PS/2\"). It initially operates in standard PS/2 format, for backward compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 82,
"text": "Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3-D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound based tracking which gave 3D input to a few millimeters accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its \"OptiBurst\" system using IR tracking for use as a Maya (graphics software) plugin.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 83,
"text": "Almost all wired mice today use USB and the USB human interface device class for communication.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 84,
"text": "Cordless or wireless mice transmit data via radio. Some mice connect to the computer through Bluetooth or Wi-Fi, while others use a receiver that plugs into the computer, for example through a USB port.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 85,
"text": "Many mice that use a USB receiver have a storage compartment for it inside the mouse. Some \"nano receivers\" are designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove.",
"title": "Connectivity and communication protocols"
},
{
"paragraph_id": 86,
"text": "MS-DOS and Windows 1.0 support connecting a mouse such as a Microsoft Mouse via multiple interfaces: BallPoint, Bus (InPort), Serial port or PS/2.",
"title": "Operating system support"
},
{
"paragraph_id": 87,
"text": "Windows 98 added built-in support for USB Human Interface Device class (USB HID), with native vertical scrolling support. Windows 2000 and Windows Me expanded this built-in support to 5-button mice.",
"title": "Operating system support"
},
{
"paragraph_id": 88,
"text": "Windows XP Service Pack 2 introduced a Bluetooth stack, allowing Bluetooth mice to be used without any USB receivers. Windows Vista added native support for horizontal scrolling and standardized wheel movement granularity for finer scrolling.",
"title": "Operating system support"
},
{
"paragraph_id": 89,
"text": "Windows 8 introduced BLE (Bluetooth Low Energy) mouse/HID support.",
"title": "Operating system support"
},
{
"paragraph_id": 90,
"text": "Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 91,
"text": "Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 92,
"text": "Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 93,
"text": "Windows also has full support for multiple input/mouse configurations for multi-user environments.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 94,
"text": "Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 95,
"text": "The introduction of Windows Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 96,
"text": "As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, currently no window managers support Multi-Pointer X leaving it relegated to custom software usage.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 97,
"text": "There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.",
"title": "Multiple-mouse systems"
},
{
"paragraph_id": 98,
"text": "Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound.",
"title": "Buttons"
},
{
"paragraph_id": 99,
"text": "Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button sits located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.",
"title": "Buttons"
},
{
"paragraph_id": 100,
"text": "Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad. Those with a trackball may be designed to stay stationary, using the trackball instead of moving the mouse.",
"title": "Scrolling"
},
{
"paragraph_id": 101,
"text": "",
"title": "Scrolling"
},
{
"paragraph_id": 102,
"text": "Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as \"horizontal\" versus \"vertical\" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter.",
"title": "Speed"
},
{
"paragraph_id": 103,
"text": "The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI) – the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen-pixel or dot on-screen per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI as reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. As of 2007, software can change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement from the last stop-point. In most software, an example being the Windows platforms, this setting is named \"speed\", referring to \"cursor precision\". However, some operating systems name this setting \"acceleration\", the typical Apple OS designation. This term is incorrect. Mouse acceleration in most mouse software refers to the change in speed of the cursor over time while the mouse movement is constant.",
"title": "Speed"
},
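Put as a small worked example: physical travel times CPI gives mickeys, and mickeys times a pixels-per-mickey ratio gives cursor travel. The numbers below are arbitrary illustrations, not defaults of any operating system.

```python
# Illustrative conversion between physical mouse travel, mickeys (counts),
# and cursor pixels. The CPI and pixel ratios below are arbitrary examples.

def inches_to_mickeys(inches, cpi):
    return inches * cpi

def mickeys_to_pixels(mickeys, pixels_per_mickey=1.0):
    return mickeys * pixels_per_mickey

mickeys = inches_to_mickeys(0.5, cpi=800)     # half an inch of hand travel
print(mickeys)                                # 400.0 counts
print(mickeys_to_pixels(mickeys, 1.0))        # 400.0 px: CPI equates to DPI
print(mickeys_to_pixels(mickeys, 0.5))        # 200.0 px with the speed halved
```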
{
"paragraph_id": 104,
"text": "For simple software, when the mouse starts to move, the software will count the number of \"counts\" or \"mickeys\" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the \"acceleration\" setting.",
"title": "Speed"
},
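That simple scheme is a piecewise-linear mapping: counts up to the threshold are scaled by the base rate factor, and counts beyond it by a larger one. A sketch with placeholder threshold and factor values:

```python
# Sketch of the simple threshold scheme described above: a small base rate
# factor below the threshold, a larger rate factor beyond it.
# Threshold and factors are illustrative placeholders.

def accelerated_pixels(mickeys, threshold=10, base_factor=0.75, fast_factor=2.0):
    mag = abs(mickeys)
    if mag <= threshold:
        pixels = mag * base_factor
    else:
        pixels = threshold * base_factor + (mag - threshold) * fast_factor
    return pixels if mickeys >= 0 else -pixels

print(accelerated_pixels(8))    # 6.0  (below the threshold: slow and precise)
print(accelerated_pixels(40))   # 67.5 (beyond the threshold: much faster)
```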
{
"paragraph_id": 105,
"text": "Operating systems sometimes apply acceleration, referred to as \"ballistics\", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.",
"title": "Speed"
},
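Modelled per axis, that legacy behaviour is just two conditional doublings, which is what makes the response so strongly nonlinear. The sketch below follows the description above with placeholder thresholds; the exact interaction of the two thresholds in the original implementation may differ.

```python
# Sketch of the pre-Windows-XP style "ballistics" described above: the reported
# movement is doubled above one threshold and doubled again above a second,
# separately per axis. Threshold values are placeholders, not actual defaults.

def legacy_ballistics(delta, threshold1=6, threshold2=10):
    mag = abs(delta)
    scaled = mag
    if mag > threshold1:
        scaled *= 2
    if mag > threshold2:
        scaled *= 2
    return scaled if delta >= 0 else -scaled

for d in (4, 8, 20):
    print(d, "->", legacy_ballistics(d))   # 4 -> 4, 8 -> 16, 20 -> 80
```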
{
"paragraph_id": 106,
"text": "Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance.",
"title": "Mousepads"
},
{
"paragraph_id": 107,
"text": "The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called \"hard mousepads\" for gamers or optical/laser mice also exist.",
"title": "Mousepads"
},
{
"paragraph_id": 108,
"text": "Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass.",
"title": "Mousepads"
},
{
"paragraph_id": 109,
"text": "Some mice also come with small \"pads\" attached to the bottom surface, also called mouse feet or mouse skates, that help the user slide the mouse smoothly across surfaces.",
"title": "Mousepads"
},
{
"paragraph_id": 110,
"text": "Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.",
"title": "In the marketplace"
},
{
"paragraph_id": 111,
"text": "The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS).",
"title": "In the marketplace"
},
{
"paragraph_id": 112,
"text": "The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse.",
"title": "In the marketplace"
},
{
"paragraph_id": 113,
"text": "The device often functions as an interface for PC-based computer games and sometimes for video game consoles. The Classic Mac OS Desk Accessory Puzzle in 1984 was the first game designed specifically for a mouse.",
"title": "Use in games"
},
{
"paragraph_id": 114,
"text": "FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs.",
"title": "Use in games"
},
{
"paragraph_id": 115,
"text": "Many shooting genre players prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as \"flick-shotting\") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition the stick also has a finite magnitude; if the player is currently using the stick to move at a non-zero velocity their ability to increase the rate of movement of the camera is further limited based on the position their displaced stick was already at before executing the maneuver. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements; all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters.",
"title": "Use in games"
},
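The contrast drawn above can be illustrated with a toy camera model: a stick contributes an angular velocity capped by its maximum deflection, while a mouse contributes an angular displacement directly. The rate cap and degrees-per-count constant below are arbitrary illustrative numbers.

```python
# Toy comparison of rate-based stick aiming vs. displacement-based mouse aiming.
# MAX_STICK_RATE and DEG_PER_COUNT are arbitrary illustrative constants.

MAX_STICK_RATE = 180.0   # camera degrees per second at full stick deflection
DEG_PER_COUNT = 0.022    # camera degrees per mouse count

def stick_yaw_change(deflection: float, dt: float) -> float:
    """Stick: yaw changes at a rate limited by deflection (-1.0 .. 1.0)."""
    deflection = max(-1.0, min(1.0, deflection))
    return MAX_STICK_RATE * deflection * dt

def mouse_yaw_change(counts: int) -> float:
    """Mouse: yaw changes by a displacement proportional to counts moved."""
    return counts * DEG_PER_COUNT

# A ~90-degree flick: one quick mouse motion vs. half a second at the stick's rate cap.
print(round(mouse_yaw_change(4091), 1))   # ~90.0 degrees, as fast as the hand moves
print(stick_yaw_change(1.0, dt=0.5))      # 90.0 degrees only after 0.5 s
```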
{
"paragraph_id": 116,
"text": "Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device. Depending on how deeply hardcoded this misbehavior is, internal user patches or external 3rd-party software may be able to fix it. Individual game engines will also have their own sensitivities. This often restricts one from taking a game's existing sensitivity, transferring it to another, and acquiring the same 360 rotational measurements. A sensitivity converter is required in order to translate rotational movements properly.",
"title": "Use in games"
},
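A sensitivity converter of the kind mentioned above usually works by holding the physical distance per full rotation constant. The sketch below assumes each engine rotates the camera by (yaw * sensitivity) degrees per count; the yaw constants and 800 CPI figure are assumptions for illustration only, not real game values.

```python
# Sketch of a simple sensitivity converter. Assumes each engine rotates the
# camera by (yaw * sensitivity) degrees per mouse count; the yaw values and
# CPI used below are assumptions for illustration, not real game constants.

def cm_per_360(sensitivity: float, yaw: float, cpi: int) -> float:
    """Physical mouse travel (in cm) needed for a full 360-degree turn."""
    counts_per_360 = 360.0 / (yaw * sensitivity)
    return counts_per_360 / cpi * 2.54

def convert_sensitivity(sens_a: float, yaw_a: float, yaw_b: float) -> float:
    """Sensitivity for game B that preserves game A's cm-per-360."""
    return sens_a * yaw_a / yaw_b

sens_b = convert_sensitivity(sens_a=2.0, yaw_a=0.022, yaw_b=0.017453)
print(round(sens_b, 4))                              # ~2.5211
# Both games now require the same physical distance for a full turn:
print(round(cm_per_360(2.0, 0.022, 800), 2))         # ~25.98 cm
print(round(cm_per_360(sens_b, 0.017453, 800), 2))   # ~25.98 cm
```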
{
"paragraph_id": 117,
"text": "Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse.",
"title": "Use in games"
},
{
"paragraph_id": 118,
"text": "The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to aim down the weapon sights. In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.",
"title": "Use in games"
},
{
"paragraph_id": 119,
"text": "Players can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward, and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with the typical WASD configuration.",
"title": "Use in games"
},
{
"paragraph_id": 120,
"text": "Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in circle around the opponent with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse toward the opponent.",
"title": "Use in games"
},
{
"paragraph_id": 121,
"text": "Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse Bungees are typically used with gaming mice because it eliminates the annoyance of the cable.",
"title": "Use in games"
},
{
"paragraph_id": 122,
"text": "Many games, such as first- or third-person shooters, have a setting named \"invert mouse\" or similar (not to be confused with \"button inversion\", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.",
"title": "Use in games"
},
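In code, an "invert mouse" option usually amounts to flipping the sign of the vertical mouse delta before it is applied to camera pitch, as in this minimal sketch (the names, the sign convention and the sensitivity constant are illustrative assumptions):

```python
# Minimal sketch of an "invert mouse" option: the vertical delta's sign is
# flipped before being applied to camera pitch. The sensitivity constant and
# the sign convention (forward = negative dy) are illustrative assumptions.

SENSITIVITY = 0.1  # camera degrees per mouse count

def apply_look(yaw: float, pitch: float, dx: int, dy: int,
               invert_y: bool = False) -> tuple[float, float]:
    yaw += dx * SENSITIVITY
    dy_applied = dy if invert_y else -dy      # default: pushing forward looks up
    pitch += dy_applied * SENSITIVITY
    pitch = max(-89.0, min(89.0, pitch))      # clamp so the camera cannot flip over
    return yaw, pitch

# Pushing the mouse forward (negative dy under this convention):
print(apply_look(0.0, 0.0, dx=0, dy=-50))                 # (0.0, 5.0)  -> look up
print(apply_look(0.0, 0.0, dx=0, dy=-50, invert_y=True))  # (0.0, -5.0) -> look down
```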
{
"paragraph_id": 123,
"text": "After id Software's commercial hit of Doom, which did not support vertical aiming, competitor Bungie's Marathon became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The \"invert\" feature actually made the mouse behave in a manner that users now regard as non-inverted (by default, moving mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users now know it.",
"title": "Use in games"
},
{
"paragraph_id": 124,
"text": "In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. A mouse was also released for the Nintendo 64, although it was only released in Japan. The 1992 game Mario Paint in particular used the mouse's capabilities, as did its Japanese-only successor Mario Artist on the N64 for its 64DD disk drive peripheral in 1999. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse product for the PlayStation console, included one along with the Linux for PlayStation 2 kit, as well as allowing owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this feature implemented in a later software update, and this support was retained on its successor, the Wii U. Microsoft's Xbox line of game consoles (which used operaring systems based on modified versions of Windows NT) also had universal-wide mouse support using USB.",
"title": "Use in games"
}
] | A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of the pointer on a display, which allows smooth control of the graphical user interface of a computer. The first public demonstration of a mouse controlling a computer system was given by Doug Engelbart in 1968 as part of the Mother of All Demos. Mice originally used two separate wheels to directly track movement across a surface: one in the X dimension and one in the Y dimension. Later, the standard design shifted to use a ball rolling on a surface to detect motion, in turn connected to internal rollers. Most modern mice use optical movement detection with no moving parts. Though originally all mice were connected to a computer by a cable, many modern mice are cordless, relying on short-range radio communication with the connected system. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input. | 2001-11-09T16:36:46Z | 2023-12-23T14:27:31Z | [
"Template:Sp",
"Template:Currency",
"Template:Further",
"Template:Infobox",
"Template:Cleanup section",
"Template:Div col end",
"Template:Ill",
"Template:More citations needed section",
"Template:Cite patent",
"Template:Authority control",
"Template:Convert",
"Template:Cvt",
"Template:Spaced en dash",
"Template:Portal",
"Template:Cite web",
"Template:Citation",
"Template:Cite magazine",
"Template:Webarchive",
"Template:Redirect",
"Template:Anchor",
"Template:Lang",
"Template:Clarify",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:Game controllers",
"Template:Short description",
"Template:Pp-move",
"Template:Cite journal",
"Template:Wikiversity",
"Template:Basic computer components",
"Template:About",
"Template:Relevance inline",
"Template:Citation needed",
"Template:Cite news",
"Template:Circa",
"Template:As of",
"Template:Div col",
"Template:Cite conference",
"Template:Main",
"Template:US patent",
"Template:Commons category",
"Template:Use dmy dates"
] | https://en.wikipedia.org/wiki/Computer_mouse |
7,059 | Civil defense | Civil defense (British English: civil defence) or civil protection is an effort to protect the citizens of a state (generally non-combatants) from human-made and natural disasters. It uses the principles of emergency operations: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. Civil-defense structures became widespread after authorities recognised the threats posed by nuclear weapons.
Since the end of the Cold War, the focus of civil defense has largely shifted from responding to military attack to dealing with emergencies and disasters in general. The new concept is characterised by a number of terms, each of which has its own specific shade of meaning, such as crisis management, emergency management, emergency preparedness, contingency planning, civil contingency, civil aid and civil protection.
Some countries treat civil defense as a key part of defense in general. For example, the Swedish-language word totalförsvar ("total defense") refers to the commitment of a wide range of national resources to defense, including the protection of all aspects of civilian life. Some countries have organized civil defense along paramilitary lines, or have incorporated it within armed forces, such as the Soviet Civil Defense Forces (Войска гражданской обороны).
The advent of civil defense was stimulated by the experience of the bombing of civilian areas during the First World War. The bombing of the United Kingdom began on 19 January 1915 when German zeppelins dropped bombs on the Great Yarmouth area, killing six people. German bombing operations of the First World War were surprisingly effective, especially after the Gotha bombers surpassed the zeppelins. The most devastating raids inflicted 121 casualties for each ton of bombs dropped; this figure was then used as a basis for predictions.
After the war, attention was turned toward civil defense in the event of war, and the Air Raid Precautions Committee (ARP) was established in 1924 to investigate ways for ensuring the protection of civilians from the danger of air-raids.
The Committee produced figures estimating that in London there would be 9,000 casualties in the first two days and then a continuing rate of 17,500 casualties a week. These rates were thought conservative. It was believed that there would be "total chaos and panic" and hysterical neurosis as the people of London would try to flee the city. To control the population harsh measures were proposed: bringing London under almost military control, and physically cordoning off the city with 120,000 troops to force people back to work. A different government department proposed setting up camps for refugees for a few days before sending them back to London.
A special government department, the Civil Defence Service, was established by the Home Office in 1935. Its remit included the pre-existing ARP as well as wardens, firemen (initially the Auxiliary Fire Service (AFS) and latterly the National Fire Service (NFS)), fire watchers, rescue, first aid post, stretcher party and industry. Over 1.9 million people served within the CD; nearly 2,400 died from enemy action.
The organization of civil defense was the responsibility of the local authority. Volunteers were assigned to different units depending on experience or training. Each local civil defense service was divided into several sections. Wardens were responsible for local reconnaissance and reporting, and leadership, organization, guidance and control of the general public. Wardens would also advise survivors of the locations of rest and food centers, and other welfare facilities.
Rescue Parties were required to assess and then access bombed-out buildings and retrieve injured or dead people. In addition they would turn off gas, electricity and water supplies, and repair or pull down unsteady buildings. Medical services, including First Aid Parties, provided on the spot medical assistance.
The expected stream of information that would be generated during an attack was handled by 'Report and Control' teams. A local headquarters would have an ARP controller who would direct rescue, first aid and decontamination teams to the scenes of reported bombing. If local services were deemed insufficient to deal with the incident then the controller could request assistance from surrounding boroughs.
Fire Guards were responsible for a designated area/building and required to monitor the fall of incendiary bombs and pass on news of any fires that had broken out to the NFS. They could deal with an individual magnesium alloy ("Elektron") incendiary bomb by dousing it with buckets of sand or water or by smothering. Additionally, 'Gas Decontamination Teams' kitted out with gas-tight and waterproof protective clothing were to deal with any gas attacks. They were trained to decontaminate buildings, roads, rail and other material that had been contaminated by liquid or jelly gases.
Little progress was made over the issue of air-raid shelters, because of the apparently irreconcilable conflict between the need to send the public underground for shelter and the need to keep them above ground for protection against gas attacks. In February 1936 the Home Secretary appointed a technical Committee on Structural Precautions against Air Attack. During the Munich crisis, local authorities dug trenches to provide shelter. After the crisis, the British Government decided to make these a permanent feature, with a standard design of precast concrete trench lining. They also decided to issue the Anderson shelter free to poorer households and to provide steel props to create shelters in suitable basements.
During the Second World War, the ARP was responsible for the issuing of gas masks, pre-fabricated air-raid shelters (such as Anderson shelters, as well as Morrison shelters), the upkeep of local public shelters, and the maintenance of the blackout. The ARP also helped rescue people after air raids and other attacks, and some women became ARP Ambulance Attendants whose job was to help administer first aid to casualties, search for survivors, and in many grim instances, help recover bodies, sometimes those of their own colleagues.
As the war progressed, the military effectiveness of Germany's aerial bombardment was very limited. Thanks to the Luftwaffe's shifting aims, the strength of British air defenses, the use of early warning radar and the life-saving actions of local civil defense units, the aerial "Blitz" during the Battle of Britain failed to break the morale of the British people, destroy the Royal Air Force or significantly hinder British industrial production. Despite a significant investment in civil and military defense, British civilian losses during the Blitz were higher than in most strategic bombing campaigns throughout the war. For example, there were 14,000-20,000 UK civilian fatalities during the Battle of Britain, a relatively high number considering that the Luftwaffe dropped only an estimated 30,000 tons of ordnance during the battle. Granted, the resulting rate of 0.47-0.67 civilian fatalities per ton of bombs dropped was lower than the earlier prediction of 121 casualties per ton. However, in comparison, Allied strategic bombing of Germany during the war proved slightly less lethal than what was observed in the UK, with an estimated 400,000-600,000 German civilian fatalities for approximately 1.35 million tons of bombs dropped on Germany, a resulting rate of approximately 0.30-0.44 civilian fatalities per ton of bombs dropped.
In the United States, the Office of Civilian Defense was established in May 1941 to coordinate civilian defense efforts. It coordinated with the Department of the Army and established similar groups to the British ARP. One of these groups that still exists today is the Civil Air Patrol, which was originally created as a civilian auxiliary to the Army. The CAP was created on December 1, 1941, with the main civil defense mission of search and rescue. The CAP also sank two Axis submarines and provided aerial reconnaissance for Allied and neutral merchant ships. In 1946, the Civil Air Patrol was barred from combat by Public Law 79-476. The CAP then received its current mission: search and rescue for downed aircraft. When the Air Force was created, in 1947, the Civil Air Patrol became the auxiliary of the Air Force.
The Coast Guard Auxiliary performs a similar role in support of the U.S. Coast Guard. Like the Civil Air Patrol, the Coast Guard Auxiliary was established in the run up to World War II. Auxiliarists were sometimes armed during the war, and extensively participated in port security operations. After the war, the Auxiliary shifted its focus to promoting boating safety and assisting the Coast Guard in performing search and rescue and marine safety and environmental protection.
In the United States, a federal civil defense program existed under Public Law 920 of the 81st Congress, as amended, from 1951 to 1994. That statutory scheme was made so-called all-hazards by Public Law 103–160 in 1993 and largely repealed by Public Law 103–337 in 1994. Parts now appear in Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 100–707 (1988, as amended). The term "emergency preparedness" was largely codified by that repeal and amendment. See 42 USC Sections 5101 and following.
In most of the states of the North Atlantic Treaty Organization, such as the United States, the United Kingdom and West Germany, as well as the Soviet Bloc, and especially in the neutral countries, such as Switzerland and in Sweden during the 1950s and 1960s, many civil defense practices took place to prepare for the aftermath of a nuclear war, which seemed quite likely at that time.
In the United Kingdom, the Civil Defence Service was disbanded in 1945, followed by the ARP in 1946. With the onset of the growing tensions between East and West, the service was revived in 1949 as the Civil Defence Corps. As a civilian volunteer organization, it was tasked to take control in the aftermath of a major national emergency, principally envisaged as being a Cold War nuclear attack. Although under the authority of the Home Office, with a centralized administrative establishment, the corps was administered locally by Corps Authorities. In general every county was a Corps Authority, as were most county boroughs in England and Wales and large burghs in Scotland.
Each division was divided into several sections, including the Headquarters, Intelligence and Operations, Scientific and Reconnaissance, Warden & Rescue, Ambulance and First Aid and Welfare.
In 1954 Coventry City Council caused international controversy when it announced plans to disband its Civil Defence committee because the councillors had decided that hydrogen bombs meant that there could be no recovery from a nuclear attack. The British government opposed such a move and held a provocative Civil Defence exercise on the streets of Coventry which Labour council members protested against. The government also decided to implement its own committee at the city's cost until the council reinstituted its committee.
In the United States, the sheer power of nuclear weapons and the perceived likelihood of such an attack precipitated a greater response than had yet been required of civil defense. Civil defense, previously considered an important and commonsense step, became divisive and controversial in the charged atmosphere of the Cold War. In 1950, the National Security Resources Board created a 162-page document outlining a model civil defense structure for the U.S. Called the "Blue Book" by civil defense professionals in reference to its solid blue cover, it was the template for legislation and organization for the next 40 years.
Perhaps the most memorable aspect of the Cold War civil defense effort was the educational effort made or promoted by the government. In Duck and Cover, Bert the Turtle advocated that children "duck and cover" when they "see the flash." Booklets such as Survival Under Atomic Attack, Fallout Protection and Nuclear War Survival Skills were also commonplace. The transcribed radio program Stars for Defense combined hit music with civil defense advice. Government institutes created public service announcements including children's songs and distributed them to radio stations to educate the public in case of nuclear attack.
US President John F. Kennedy (1961–63) launched an ambitious effort to install fallout shelters throughout the United States. These shelters would not protect against the blast and heat effects of nuclear weapons, but would provide some protection against the radiation effects that would last for weeks and even affect areas distant from a nuclear explosion. In order for most of these preparations to be effective, there had to be some degree of warning. In 1951, CONELRAD (Control of Electromagnetic Radiation) was established. Under the system, a few primary stations would be alerted of an emergency and would broadcast an alert. All broadcast stations throughout the country would be constantly listening to an upstream station and repeat the message, thus passing it from station to station.
In a once-classified US war game analysis examining varying levels of war escalation, warning and pre-emptive attacks in the late 1950s and early 1960s, it was estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, the cost of a full-scale civil defense program was regarded as less effective in cost-benefit analysis than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was increasing their nuclear stockpile, the efficacy of both would follow a diminishing returns trend.
Contrary to the largely noncommittal approach taken in NATO, with its stops and starts in civil defense depending on the whims of each newly elected government, the military strategy in the comparatively more ideologically consistent USSR held that, amongst other things, a winnable nuclear war was possible. To this effect the Soviets planned to minimize, as far as possible, the effects of nuclear weapon strikes on their territory, and therefore spent considerably more thought on civil defense preparations than the U.S., with defense plans that have been assessed to be far more effective than those in the U.S.
Soviet Civil Defense Troops played the main role in the massive disaster relief operation following the 1986 Chernobyl nuclear accident. Defense Troop reservists were officially mobilized (as in a case of war) from throughout the USSR to join the Chernobyl task force and formed on the basis of the Kyiv Civil Defense Brigade. The task force performed some high-risk tasks including, with the failure of their robotic machinery, the manual removal of highly-radioactive debris. Many of their personnel were later decorated with medals for their work at containing the release of radiation into the environment, with a number of the 56 deaths from the accident being Civil defense troops.
In Western countries, strong civil defense policies were never properly implemented, because they were fundamentally at odds with the doctrine of "mutual assured destruction" (MAD) by making provisions for survivors. It was also considered that a full-fledged total defense would not have been worth the very large expense. For whatever reason, the public saw efforts at civil defense as fundamentally ineffective against the powerful destructive forces of nuclear weapons, and therefore a waste of time and money, although detailed scientific research programs did underlie the much-mocked government civil defense pamphlets of the 1950s and 1960s.
The Civil Defence Corps was stood down in Great Britain in 1968 due to the financial crisis of the mid-1960s. Its neighbors, however, remained committed to Civil Defence, namely the Isle of Man Civil Defence Corps and Civil Defence Ireland (Republic of Ireland).
In the United States, the various civil defense agencies were replaced with the Federal Emergency Management Agency (FEMA) in 1979. In 2002 this became part of the Department of Homeland Security. The focus was shifted from nuclear war to an "all-hazards" approach of Comprehensive Emergency Management. Natural disasters and the emergence of new threats such as terrorism have caused attention to be focused away from traditional civil defense and into new forms of civil protection such as emergency management and homeland security.
Many countries maintain a national Civil Defence Corps, usually having a wide brief for assisting in large scale civil emergencies such as flood, earthquake, invasion, or civil disorder.
After the September 11 attacks in 2001, in the United States the concept of civil defense has been revisited under the umbrella term of homeland security and all-hazards emergency management.
In Europe, the triangle CD logo continues to be widely used. The old U.S. civil defense logo was used in the FEMA logo until 2006 and is hinted at in the United States Civil Air Patrol logo. Created in 1939 by Charles Coiner of the N. W. Ayer Advertising Agency, it was used throughout World War II and the Cold War era. In 2006, the National Emergency Management Association—a U.S. organization made up of state emergency managers—"officially" retired the Civil Defense triangle logo, replacing it with a stylised EM (standing for Emergency management). The name and logo, however, continue to be used by Hawaii State Civil Defense and Guam Homeland Security/Office of Civil Defense.
The term "civil protection" is currently widely used within the European Union to refer to government-approved systems and resources tasked with protecting the non-combat population, primarily in the event of natural and technological disasters. For example, the EU's humanitarian aid policy director on the Ebola Crisis, Florika Fink-Hooijer, said that civil protection requires "not just more resources, but first and foremost better governance of the resources that are available including better synergies between humanitarian aid and civil protection". In recent years there has been emphasis on preparedness for technological disasters resulting from terrorist attack. Within EU countries the term "crisis-management" emphasizes the political and security dimension rather than measures to satisfy the immediate needs of the population.
In Australia, civil defense is the responsibility of the volunteer-based State Emergency Service. The United Kingdom is seeing a resurgence of Civil Defence with the development of the Joint Civil Aid Corps, which is building on the heritage of both the Civil Defence Services of WW2 and the Civil Defence Corps of the Cold War period. However, the Joint Civil Aid Corps is structured and designed for modern society in the UK, and is probably the only Civil Defence organization that is a registered charity and is not funded through government means.
In most former Soviet countries civil defense is the responsibility of governmental ministries, such as Russia's Ministry of Emergency Situations.
Relatively small investments in preparation can speed up recovery by months or years and thereby prevent millions of deaths by hunger, cold and disease. According to human capital theory in economics, a country's population is more valuable than all of the land, factories and other assets that it possesses. People rebuild a country after its destruction, and it is therefore important for the economic security of a country that it protect its people. According to psychology, it is important for people to feel as though they are in control of their own destiny, and preparing for uncertainty via civil defense may help to achieve this.
In the United States, the federal civil defense program was authorized by statute and ran from 1951 to 1994. Originally authorized by Public Law 920 of the 81st Congress, it was repealed by Public Law 103–337 in 1994. Small portions of that statutory scheme were incorporated into the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Public Law 100–707), which partly superseded, partly amended, and partly supplemented the Disaster Relief Act of 1974 (Public Law 93–288). In the portions of the civil defense statute incorporated into the Stafford Act, the primary modification was to use the term "emergency preparedness" wherever the term "civil defense" had previously appeared in the statutory language.
An important concept initiated by President Jimmy Carter was the so-called "Crisis Relocation Program" administered as part of the federal civil defense program. That effort largely lapsed under President Ronald Reagan, who discontinued the Carter initiative because of opposition from areas potentially hosting the relocated population.
Threats to civilians and civilian life include NBC (Nuclear, Biological, and Chemical warfare) and others, like the more modern term CBRN (Chemical Biological Radiological and Nuclear). Threat assessment involves studying each threat so that preventative measures can be built into civilian life.
This threat category refers to conventional explosives. A blast shelter designed to protect only from radiation and fallout would be much more vulnerable to conventional explosives. See also fallout shelter.
Shelter intended to protect against nuclear blast effects would include thick concrete and other sturdy elements which are resistant to conventional explosives. The biggest threats from a nuclear attack are effects from the blast, fires and radiation. One of the most prepared countries for a nuclear attack is Switzerland. Almost every building in Switzerland has an abri (shelter) designed to protect against the initial blast of a nuclear explosion and the fall-out that follows. Because of this, many people also use these shelters as safes to protect valuables, photos, financial information and so on. Switzerland also has air-raid and nuclear-raid sirens in every village.
A "radiologically enhanced weapon", or "dirty bomb", uses an explosive to spread radioactive material. This is a theoretical risk, and such weapons have not been used by terrorists. Depending on the quantity of the radioactive material, the dangers may be mainly psychological. Toxic effects can be managed by standard hazmat techniques.
The threat here is primarily from disease-causing microorganisms such as bacteria and viruses.
Various chemical agents are a threat, such as nerve gas (VX, Sarin, and so on.).
Mitigation is the process of actively preventing war or the release of nuclear weapons. It includes policy analysis, diplomacy, political measures, nuclear disarmament and more military responses such as a National Missile Defense and air defense artillery. In the case of counter-terrorism, mitigation would include diplomacy, intelligence gathering and direct action against terrorist groups. Mitigation may also be reflected in long-term planning such as the design of the interstate highway system and the placement of military bases further away from populated areas.
Preparation consists of building blast shelters and pre-positioning information, supplies, and emergency infrastructure. For example, most larger cities in the U.S. now have underground emergency operations centers that can perform civil defense coordination. FEMA also has many underground facilities for the same purpose located near major railheads such as the ones in Denton, Texas and Mount Weather, Virginia.
Other measures would include continual government inventories of grain silos, the Strategic National Stockpile, the uncapping of the Strategic Petroleum Reserve, the dispersal of lorry-transportable bridges, water purification, mobile refineries, mobile de-contamination facilities, mobile general and special purpose disaster mortuary facilities such as Disaster Mortuary Operational Response Team (DMORT) and DMORT-WMD, and other aids such as temporary housing to speed civil recovery.
On an individual scale, one means of preparation for exposure to nuclear fallout is to obtain potassium iodide (KI) tablets as a safety measure to protect the human thyroid gland from the uptake of dangerous radioactive iodine. Another measure is to cover the nose, mouth and eyes with a piece of cloth and sunglasses to protect against alpha particles, which are only an internal hazard.
To support and supplement efforts at national, regional and local level with regard to disaster prevention, the preparedness of those responsible for civil protection and the intervention in the event of disaster
Preparing also includes sharing information:
Response consists first of warning civilians so they can enter fallout shelters and protect assets.
Staffing a response is always full of problems in a civil defense emergency. After an attack, conventional full-time emergency services are dramatically overloaded, with conventional fire fighting response times often exceeding several days. Some capability is maintained by local and state agencies, and an emergency reserve is provided by specialized military units, especially civil affairs, Military Police, Judge Advocates and combat engineers.
However, the traditional response to massed attack on civilian population centers is to maintain a mass-trained force of volunteer emergency workers. Studies in World War II showed that lightly trained (40 hours or less) civilians in organised teams can perform up to 95% of emergency activities when trained, liaised and supported by local government. In this plan, the populace rescues itself from most situations, and provides information to a central office to prioritize professional emergency services.
In the 1990s, this concept was revived by the Los Angeles Fire Department to cope with civil emergencies such as earthquakes. The program was widely adopted, providing standard terms for organization. In the U.S., this is now official federal policy, and it is implemented by community emergency response teams, under the Department of Homeland Security, which certifies training programs by local governments, and registers "certified disaster service workers" who complete such training.
Recovery consists of rebuilding damaged infrastructure, buildings and production. The recovery phase is the longest and ultimately most expensive phase. Once the immediate "crisis" has passed, cooperation fades away and recovery efforts are often politicized or seen as economic opportunities.
Preparation for recovery can be very helpful. If mitigating resources are dispersed before the attack, cascades of social failures can be prevented. One hedge against bridge damage in riverine cities is to subsidize a "tourist ferry" that performs scenic cruises on the river. When a bridge is down, the ferry takes up the load.
Civil Defense is also the name of a number of organizations around the world dedicated to protecting civilians from military attacks, as well as to providing rescue services after natural and human-made disasters alike.
Worldwide protection is managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).
In a few countries such as Jordan and Singapore (see Singapore Civil Defence Force), civil defense is essentially the same organization as the fire brigade. In most countries, however, civil defense is a government-managed, volunteer-staffed organization, separate from the fire brigade and the ambulance service.
As the threat of Cold War eased, a number of such civil defense organizations have been disbanded or mothballed (as in the case of the Royal Observer Corps in the United Kingdom and the United States civil defense), while others have changed their focuses into providing rescue services after natural disasters (as for the State Emergency Service in Australia). However, the ideals of Civil Defense have been brought back in the United States under FEMA's Citizen Corps and Community Emergency Response Team (CERT).
In the United Kingdom Civil Defence work is carried out by Emergency Responders under the Civil Contingencies Act 2004, with assistance from voluntary groups such as RAYNET, Search and Rescue Teams and 4x4 Response. In Ireland, the Civil Defence is still very much an active organization and is occasionally called upon for its Auxiliary Fire Service and ambulance/rescue services when emergencies such as flash flooding occur and require additional manpower. The organization has units of trained firemen and medical responders based in key areas around the country.
UK:
US:
Germany:
General: | [
{
"paragraph_id": 0,
"text": "Civil defense (British English: civil defence) or civil protection is an effort to protect the citizens of a state (generally non-combatants) from human-made and natural disasters. It uses the principles of emergency operations: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. Civil-defense structures became widespread after authorities recognised the threats posed by nuclear weapons.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Since the end of the Cold War, the focus of civil defense has largely shifted from responding to military attack to dealing with emergencies and disasters in general. The new concept is characterised by a number of terms, each of which has its own specific shade of meaning, such as crisis management, emergency management, emergency preparedness, contingency planning, civil contingency, civil aid and civil protection.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Some countries treat civil defense as a key part of defense in general. For example, the Swedish-language word totalförsvar (\"total defense\") refers to the commitment of a wide range of national resources to defense, including the protection of all aspects of civilian life. Some countries have organized civil defense along paramilitary lines, or have incorporated it within armed forces, such as the Soviet Civil Defense Forces (Войска гражданской обороны).",
"title": ""
},
{
"paragraph_id": 3,
"text": "The advent of civil defense was stimulated by the experience of the bombing of civilian areas during the First World War. The bombing of the United Kingdom began on 19 January 1915 when German zeppelins dropped bombs on the Great Yarmouth area, killing six people. German bombing operations of the First World War were surprisingly effective, especially after the Gotha bombers surpassed the zeppelins. The most devastating raids inflicted 121 casualties for each ton of bombs dropped; this figure was then used as a basis for predictions.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "After the war, attention was turned toward civil defense in the event of war, and the Air Raid Precautions Committee (ARP) was established in 1924 to investigate ways for ensuring the protection of civilians from the danger of air-raids.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The Committee produced figures estimating that in London there would be 9,000 casualties in the first two days and then a continuing rate of 17,500 casualties a week. These rates were thought conservative. It was believed that there would be \"total chaos and panic\" and hysterical neurosis as the people of London would try to flee the city. To control the population harsh measures were proposed: bringing London under almost military control, and physically cordoning off the city with 120,000 troops to force people back to work. A different government department proposed setting up camps for refugees for a few days before sending them back to London.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "A special government department, the Civil Defence Service, was established by the Home Office in 1935. Its remit included the pre-existing ARP as well as wardens, firemen (initially the Auxiliary Fire Service (AFS) and latterly the National Fire Service (NFS)), fire watchers, rescue, first aid post, stretcher party and industry. Over 1.9 million people served within the CD; nearly 2,400 died from enemy action.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The organization of civil defense was the responsibility of the local authority. Volunteers were ascribed to different units depending on experience or training. Each local civil defense service was divided into several sections. Wardens were responsible for local reconnaissance and reporting, and leadership, organization, guidance and control of the general public. Wardens would also advise survivors of the locations of rest and food centers, and other welfare facilities.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Rescue Parties were required to assess and then access bombed-out buildings and retrieve injured or dead people. In addition they would turn off gas, electricity and water supplies, and repair or pull down unsteady buildings. Medical services, including First Aid Parties, provided on the spot medical assistance.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The expected stream of information that would be generated during an attack was handled by 'Report and Control' teams. A local headquarters would have an ARP controller who would direct rescue, first aid and decontamination teams to the scenes of reported bombing. If local services were deemed insufficient to deal with the incident then the controller could request assistance from surrounding boroughs.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Fire Guards were responsible for a designated area/building and required to monitor the fall of incendiary bombs and pass on news of any fires that had broken out to the NFS. They could deal with an individual magnesium alloy (\"Elektron\") incendiary bomb by dousing it with buckets of sand or water or by smothering. Additionally, 'Gas Decontamination Teams' kitted out with gas-tight and waterproof protective clothing were to deal with any gas attacks. They were trained to decontaminate buildings, roads, rail and other material that had been contaminated by liquid or jelly gases.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Little progress was made over the issue of air-raid shelters, because of the apparently irreconcilable conflict between the need to send the public underground for shelter and the need to keep them above ground for protection against gas attacks. In February 1936 the Home Secretary appointed a technical Committee on Structural Precautions against Air Attack. During the Munich crisis, local authorities dug trenches to provide shelter. After the crisis, the British Government decided to make these a permanent feature, with a standard design of precast concrete trench lining. They also decided to issue the Anderson shelter free to poorer households and to provide steel props to create shelters in suitable basements.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "During the Second World War, the ARP was responsible for the issuing of gas masks, pre-fabricated air-raid shelters (such as Anderson shelters, as well as Morrison shelters), the upkeep of local public shelters, and the maintenance of the blackout. The ARP also helped rescue people after air raids and other attacks, and some women became ARP Ambulance Attendants whose job was to help administer first aid to casualties, search for survivors, and in many grim instances, help recover bodies, sometimes those of their own colleagues.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "As the war progressed, the military effectiveness of Germany's aerial bombardment was very limited. Thanks to the Luftwaffe's shifting aims, the strength of British air defenses, the use of early warning radar and the life-saving actions of local civil defense units, the aerial \"Blitz\" during the Battle of Britain failed to break the morale of the British people, destroy the Royal Air Force or significantly hinder British industrial production. Despite a significant investment in civil and military defense, British civilian losses during the Blitz were higher than in most strategic bombing campaigns throughout the war. For example, there were 14,000-20,000 UK civilian fatalities during the Battle of Britain, a relatively high number considering that the Luftwaffe dropped only an estimated 30,000 tons of ordinance during the battle. Granted, this resulting 0.47-0.67 civilian fatalities per ton of bombs dropped was lower than the earlier 121 casualties per ton prediction. However, in comparison, Allied strategic bombing of Germany during the war proved slightly less lethal than what was observed in the UK, with an estimated 400,000-600,000 German civilian fatalities for approximately 1.35 million tons of bombs dropped on Germany, an estimated resulting rate therefore of 0.30-0.44 civilian fatalities per ton of bombs dropped.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In the United States, the Office of Civilian Defense was established in May 1941 to coordinate civilian defense efforts. It coordinated with the Department of the Army and established similar groups to the British ARP. One of these groups that still exists today is the Civil Air Patrol, which was originally created as a civilian auxiliary to the Army. The CAP was created on December 1, 1941, with the main civil defense mission of search and rescue. The CAP also sank two Axis submarines and provided aerial reconnaissance for Allied and neutral merchant ships. In 1946, the Civil Air Patrol was barred from combat by Public Law 79-476. The CAP then received its current mission: search and rescue for downed aircraft. When the Air Force was created, in 1947, the Civil Air Patrol became the auxiliary of the Air Force.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Coast Guard Auxiliary performs a similar role in support of the U.S. Coast Guard. Like the Civil Air Patrol, the Coast Guard Auxiliary was established in the run up to World War II. Auxiliarists were sometimes armed during the war, and extensively participated in port security operations. After the war, the Auxiliary shifted its focus to promoting boating safety and assisting the Coast Guard in performing search and rescue and marine safety and environmental protection.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the United States a federal civil defense program existed under Public Law 920 of the 81st Congress, as amended, from 1951 to 1994. That statutory scheme was made so-called all-hazards by Public Law 103–160 in 1993 and largely repealed by Public Law 103–337 in 1994. Parts now appear in Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 100-107 [1988 as amended]. The term EMERGENCY PREPAREDNESS was largely codified by that repeal and amendment. See 42 USC Sections 5101 and following.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In most of the states of the North Atlantic Treaty Organization, such as the United States, the United Kingdom and West Germany, as well as the Soviet Bloc, and especially in the neutral countries, such as Switzerland and in Sweden during the 1950s and 1960s, many civil defense practices took place to prepare for the aftermath of a nuclear war, which seemed quite likely at that time.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In the United Kingdom, the Civil Defence Service was disbanded in 1945, followed by the ARP in 1946. With the onset of the growing tensions between East and West, the service was revived in 1949 as the Civil Defence Corps. As a civilian volunteer organization, it was tasked to take control in the aftermath of a major national emergency, principally envisaged as being a Cold War nuclear attack. Although under the authority of the Home Office, with a centralized administrative establishment, the corps was administered locally by Corps Authorities. In general every county was a Corps Authority, as were most county boroughs in England and Wales and large burghs in Scotland.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Each division was divided into several sections, including the Headquarters, Intelligence and Operations, Scientific and Reconnaissance, Warden & Rescue, Ambulance and First Aid and Welfare.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1954 Coventry City Council caused international controversy when it announced plans to disband its Civil Defence committee because the councillors had decided that hydrogen bombs meant that there could be no recovery from a nuclear attack. The British government opposed such a move and held a provocative Civil Defence exercise on the streets of Coventry which Labour council members protested against. The government also decided to implement its own committee at the city's cost until the council reinstituted its committee.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "In the United States, the sheer power of nuclear weapons and the perceived likelihood of such an attack precipitated a greater response than had yet been required of civil defense. Civil defense, previously considered an important and commonsense step, became divisive and controversial in the charged atmosphere of the Cold War. In 1950, the National Security Resources Board created a 162-page document outlining a model civil defense structure for the U.S. Called the \"Blue Book\" by civil defense professionals in reference to its solid blue cover, it was the template for legislation and organization for the next 40 years.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Perhaps the most memorable aspect of the Cold War civil defense effort was the educational effort made or promoted by the government. In Duck and Cover, Bert the Turtle advocated that children \"duck and cover\" when they \"see the flash.\" Booklets such as Survival Under Atomic Attack, Fallout Protection and Nuclear War Survival Skills were also commonplace. The transcribed radio program Stars for Defense combined hit music with civil defense advice. Government institutes created public service announcements including children's songs and distributed them to radio stations to educate the public in case of nuclear attack.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The US President Kennedy (1961–63) launched an ambitious effort to install fallout shelters throughout the United States. These shelters would not protect against the blast and heat effects of nuclear weapons, but would provide some protection against the radiation effects that would last for weeks and even affect areas distant from a nuclear explosion. In order for most of these preparations to be effective, there had to be some degree of warning. In 1951, CONELRAD (Control of Electromagnetic Radiation) was established. Under the system, a few primary stations would be alerted of an emergency and would broadcast an alert. All broadcast stations throughout the country would be constantly listening to an upstream station and repeat the message, thus passing it from station to station.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In a once classified US war game analysis, looking at varying levels of war escalation, warning and pre-emptive attacks in the late 1950s early 1960s, it was estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, the cost of a full-scale civil defense program was regarded as less effective in cost-benefit analysis than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was increasing their nuclear stockpile, the efficacy of both would follow a diminishing returns trend.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Contrary to the largely noncommittal approach taken in NATO, with its stops and starts in civil defense depending on the whims of each newly elected government, the military strategy in the comparatively more ideologically consistent USSR held that, amongst other things, a winnable nuclear war was possible. To this effect the Soviets planned to minimize, as far as possible, the effects of nuclear weapon strikes on its territory, and therefore spent considerably more thought on civil defense preparations than in U.S., with defense plans that have been assessed to be far more effective than those in the U.S.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Soviet Civil Defense Troops played the main role in the massive disaster relief operation following the 1986 Chernobyl nuclear accident. Defense Troop reservists were officially mobilized (as in a case of war) from throughout the USSR to join the Chernobyl task force and formed on the basis of the Kyiv Civil Defense Brigade. The task force performed some high-risk tasks including, with the failure of their robotic machinery, the manual removal of highly-radioactive debris. Many of their personnel were later decorated with medals for their work at containing the release of radiation into the environment, with a number of the 56 deaths from the accident being Civil defense troops.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In Western countries, strong civil defense policies were never properly implemented, because it was fundamentally at odds with the doctrine of \"mutual assured destruction\" (MAD) by making provisions for survivors. It was also considered that a full-fledged total defense would have not been worth the very large expense. For whatever reason, the public saw efforts at civil defense as fundamentally ineffective against the powerful destructive forces of nuclear weapons, and therefore a waste of time and money, although detailed scientific research programs did underlie the much-mocked government civil defense pamphlets of the 1950s and 1960s.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The Civil Defence Corps was stood down in Great Britain in 1968 due to the financial crisis of the mid-1960s. Its neighbors, however, remained committed to Civil Defence, namely the Isle of Man Civil Defence Corps and Civil Defence Ireland (Republic of Ireland).",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In the United States, the various civil defense agencies were replaced with the Federal Emergency Management Agency (FEMA) in 1979. In 2002 this became part of the Department of Homeland Security. The focus was shifted from nuclear war to an \"all-hazards\" approach of Comprehensive Emergency Management. Natural disasters and the emergence of new threats such as terrorism have caused attention to be focused away from traditional civil defense and into new forms of civil protection such as emergency management and homeland security.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Many countries maintain a national Civil Defence Corps, usually having a wide brief for assisting in large scale civil emergencies such as flood, earthquake, invasion, or civil disorder.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "After the September 11 attacks in 2001, in the United States the concept of civil defense has been revisited under the umbrella term of homeland security and all-hazards emergency management.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In Europe, the triangle CD logo continues to be widely used. The old U.S. civil defense logo was used in the FEMA logo until 2006 and is hinted at in the United States Civil Air Patrol logo. Created in 1939 by Charles Coiner of the N. W. Ayer Advertising Agency, it was used throughout World War II and the Cold War era. In 2006, the National Emergency Management Association—a U.S. organization made up of state emergency managers—\"officially\" retired the Civil Defense triangle logo, replacing it with a stylised EM (standing for Emergency management). The name and logo, however, continue to be used by Hawaii State Civil Defense and Guam Homeland Security/Office of Civil Defense.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The term \"civil protection\" is currently widely used within the European Union to refer to government-approved systems and resources tasked with protecting the non-combat population, primarily in the event of natural and technological disasters. For example, the EU's humanitarian aid policy director on the Ebola Crisis, Florika Fink-Hooijer, said that civil protection requires \"not just more resources, but first and foremost better governance of the resources that are available including better synergies between humanitarian aid and civil protection\". In recent years there has been emphasis on preparedness for technological disasters resulting from terrorist attack. Within EU countries the term \"crisis-management\" emphasizes the political and security dimension rather than measures to satisfy the immediate needs of the population.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In Australia, civil defense is the responsibility of the volunteer-based State Emergency Service. The United Kingdom is seeing a resurgence of Civil Defence with the development of the Joint Civil Aid Corps, which is building on the heritage of both the Civil Defence Services of WW2 and the Civil Defence Corps of the Cold War period. However, the Joint Civil Aid Corps is structured and designed for modern society in the UK, and is probably the on Civil Defence organization that is a registered charity, and not funded through government means.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In most former Soviet countries civil defense is the responsibility of governmental ministries, such as Russia's Ministry of Emergency Situations.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Relatively small investments in preparation can speed up recovery by months or years and thereby prevent millions of deaths by hunger, cold and disease. According to human capital theory in economics, a country's population is more valuable than all of the land, factories and other assets that it possesses. People rebuild a country after its destruction, and it is therefore important for the economic security of a country that it protect its people. According to psychology, it is important for people to feel as though they are in control of their own destiny, and preparing for uncertainty via civil defense may help to achieve this.",
"title": "Importance"
},
{
"paragraph_id": 37,
"text": "In the United States, the federal civil defense program was authorized by statute and ran from 1951 to 1994. Originally authorized by Public Law 920 of the 81st Congress, it was repealed by Public Law 93–337 in 1994. Small portions of that statutory scheme were incorporated into the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Public Law 100–707) which partly superseded in part, partly amended, and partly supplemented the Disaster Relief Act of 1974 (Public Law 93-288). In the portions of the civil defense statute incorporated into the Stafford Act, the primary modification was to use the term \"Emergency Preparedness\" wherever the term \"Civil Defence\" had previously appeared in the statutory language.",
"title": "Importance"
},
{
"paragraph_id": 38,
"text": "An important concept initiated by President Jimmy Carter was the so-called \"Crisis Relocation Program\" administered as part of the federal civil defense program. That effort largely lapsed under President Ronald Reagan, who discontinued the Carter initiative because of opposition from areas potentially hosting the relocated population.",
"title": "Importance"
},
{
"paragraph_id": 39,
"text": "Threats to civilians and civilian life include NBC (Nuclear, Biological, and Chemical warfare) and others, like the more modern term CBRN (Chemical Biological Radiological and Nuclear). Threat assessment involves studying each threat so that preventative measures can be built into civilian life.",
"title": "Threat assessment"
},
{
"paragraph_id": 40,
"text": "Refers to conventional explosives. A blast shelter designed to protect only from radiation and fallout would be much more vulnerable to conventional explosives. See also fallout shelter.",
"title": "Threat assessment"
},
{
"paragraph_id": 41,
"text": "Shelter intended to protect against nuclear blast effects would include thick concrete and other sturdy elements which are resistant to conventional explosives. The biggest threats from a nuclear attack are effects from the blast, fires and radiation. One of the most prepared countries for a nuclear attack is Switzerland. Almost every building in Switzerland has an abri (shelter) against the initial nuclear bomb and explosion followed by the fall-out. Because of this, many people use it as a safe to protect valuables, photos, financial information and so on. Switzerland also has air-raid and nuclear-raid sirens in every village.",
"title": "Threat assessment"
},
{
"paragraph_id": 42,
"text": "A \"radiologically enhanced weapon\", or \"dirty bomb\", uses an explosive to spread radioactive material. This is a theoretical risk, and such weapons have not been used by terrorists. Depending on the quantity of the radioactive material, the dangers may be mainly psychological. Toxic effects can be managed by standard hazmat techniques.",
"title": "Threat assessment"
},
{
"paragraph_id": 43,
"text": "The threat here is primarily from disease-causing microorganisms such as bacteria and viruses.",
"title": "Threat assessment"
},
{
"paragraph_id": 44,
"text": "Various chemical agents are a threat, such as nerve gas (VX, Sarin, and so on.).",
"title": "Threat assessment"
},
{
"paragraph_id": 45,
"text": "Mitigation is the process of actively preventing war or the release of nuclear weapons. It includes policy analysis, diplomacy, political measures, nuclear disarmament and more military responses such as a National Missile Defense and air defense artillery. In the case of counter-terrorism, mitigation would include diplomacy, intelligence gathering and direct action against terrorist groups. Mitigation may also be reflected in long-term planning such as the design of the interstate highway system and the placement of military bases further away from populated areas.",
"title": "Stages"
},
{
"paragraph_id": 46,
"text": "Preparation consists of building blast shelters and pre-positioning information, supplies, and emergency infrastructure. For example, most larger cities in the U.S. now have underground emergency operations centers that can perform civil defense coordination. FEMA also has many underground facilities for the same purpose located near major railheads such as the ones in Denton, Texas and Mount Weather, Virginia.",
"title": "Stages"
},
{
"paragraph_id": 47,
"text": "Other measures would include continual government inventories of grain silos, the Strategic National Stockpile, the uncapping of the Strategic Petroleum Reserve, the dispersal of lorry-transportable bridges, water purification, mobile refineries, mobile de-contamination facilities, mobile general and special purpose disaster mortuary facilities such as Disaster Mortuary Operational Response Team (DMORT) and DMORT-WMD, and other aids such as temporary housing to speed civil recovery.",
"title": "Stages"
},
{
"paragraph_id": 48,
"text": "On an individual scale, one means of preparation for exposure to nuclear fallout is to obtain potassium iodide (KI) tablets as a safety measure to protect the human thyroid gland from the uptake of dangerous radioactive iodine. Another measure is to cover the nose, mouth and eyes with a piece of cloth and sunglasses to protect against alpha particles, which are only an internal hazard.",
"title": "Stages"
},
{
"paragraph_id": 49,
"text": "To support and supplement efforts at national, regional and local level with regard to disaster prevention, the preparedness of those responsible for civil protection and the intervention in the event of disaster",
"title": "Stages"
},
{
"paragraph_id": 50,
"text": "Preparing also includes sharing information:",
"title": "Stages"
},
{
"paragraph_id": 51,
"text": "Response consists first of warning civilians so they can enter fallout shelters and protect assets.",
"title": "Stages"
},
{
"paragraph_id": 52,
"text": "Staffing a response is always full of problems in a civil defense emergency. After an attack, conventional full-time emergency services are dramatically overloaded, with conventional fire fighting response times often exceeding several days. Some capability is maintained by local and state agencies, and an emergency reserve is provided by specialized military units, especially civil affairs, Military Police, Judge Advocates and combat engineers.",
"title": "Stages"
},
{
"paragraph_id": 53,
"text": "However, the traditional response to massed attack on civilian population centers is to maintain a mass-trained force of volunteer emergency workers. Studies in World War II showed that lightly trained (40 hours or less) civilians in organised teams can perform up to 95% of emergency activities when trained, liaised and supported by local government. In this plan, the populace rescues itself from most situations, and provides information to a central office to prioritize professional emergency services.",
"title": "Stages"
},
{
"paragraph_id": 54,
"text": "In the 1990s, this concept was revived by the Los Angeles Fire Department to cope with civil emergencies such as earthquakes. The program was widely adopted, providing standard terms for organization. In the U.S., this is now official federal policy, and it is implemented by community emergency response teams, under the Department of Homeland Security, which certifies training programs by local governments, and registers \"certified disaster service workers\" who complete such training.",
"title": "Stages"
},
{
"paragraph_id": 55,
"text": "",
"title": "Stages"
},
{
"paragraph_id": 56,
"text": "Recovery consists of rebuilding damaged infrastructure, buildings and production. The recovery phase is the longest and ultimately most expensive phase. Once the immediate \"crisis\" has passed, cooperation fades away and recovery efforts are often politicized or seen as economic opportunities.",
"title": "Stages"
},
{
"paragraph_id": 57,
"text": "Preparation for recovery can be very helpful. If mitigating resources are dispersed before the attack, cascades of social failures can be prevented. One hedge against bridge damage in riverine cities is to subsidize a \"tourist ferry\" that performs scenic cruises on the river. When a bridge is down, the ferry takes up the load.",
"title": "Stages"
},
{
"paragraph_id": 58,
"text": "Civil Defense is also the name of a number of organizations around the world dedicated to protecting civilians from military attacks, as well as to providing rescue services after natural and human-made disasters alike.",
"title": "Civil defense organizations"
},
{
"paragraph_id": 59,
"text": "Worldwide protection is managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).",
"title": "Civil defense organizations"
},
{
"paragraph_id": 60,
"text": "In a few countries such as Jordan and Singapore (see Singapore Civil Defence Force), civil defense is essentially the same organization as the fire brigade. In most countries, however, civil defense is a government-managed, volunteer-staffed organization, separate from the fire brigade and the ambulance service.",
"title": "Civil defense organizations"
},
{
"paragraph_id": 61,
"text": "As the threat of Cold War eased, a number of such civil defense organizations have been disbanded or mothballed (as in the case of the Royal Observer Corps in the United Kingdom and the United States civil defense), while others have changed their focuses into providing rescue services after natural disasters (as for the State Emergency Service in Australia). However, the ideals of Civil Defense have been brought back in the United States under FEMA's Citizen Corps and Community Emergency Response Team (CERT).",
"title": "Civil defense organizations"
},
{
"paragraph_id": 62,
"text": "In the United Kingdom Civil Defence work is carried out by Emergency Responders under the Civil Contingencies Act 2004, with assistance from voluntary groups such as RAYNET, Search and Rescue Teams and 4x4 Response. In Ireland, the Civil Defence is still very much an active organization and is occasionally called upon for its Auxiliary Fire Service and ambulance/rescue services when emergencies such as flash flooding occur and require additional manpower. The organization has units of trained firemen and medical responders based in key areas around the country.",
"title": "Civil defense organizations"
},
{
"paragraph_id": 63,
"text": "UK:",
"title": "Civil defense organizations"
},
{
"paragraph_id": 64,
"text": "US:",
"title": "Civil defense organizations"
},
{
"paragraph_id": 65,
"text": "Germany:",
"title": "Civil defense organizations"
},
{
"paragraph_id": 66,
"text": "General:",
"title": "See also"
}
] | Civil defense or civil protection is an effort to protect the citizens of a state from human-made and natural disasters. It uses the principles of emergency operations: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. Civil-defense structures became widespread after authorities recognised the threats posed by nuclear weapons. Since the end of the Cold War, the focus of civil defense has largely shifted from responding to military attack to dealing with emergencies and disasters in general. The new concept is characterised by a number of terms, each of which has its own specific shade of meaning, such as crisis management, emergency management, emergency preparedness, contingency planning, civil contingency, civil aid and civil protection. Some countries treat civil defense as a key part of defense in general. For example, the Swedish-language word totalförsvar refers to the commitment of a wide range of national resources to defense, including the protection of all aspects of civilian life. Some countries have organized civil defense along paramilitary lines, or have incorporated it within armed forces, such as the Soviet Civil Defense Forces. | 2001-11-10T08:01:30Z | 2023-12-31T13:10:46Z | [
"Template:Dead link",
"Template:Ill",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:Webarchive",
"Template:Quantify",
"Template:Dubious",
"Template:Citation",
"Template:Authority control",
"Template:Commons category",
"Template:Main",
"Template:See also",
"Template:Cite journal",
"Template:Cite magazine",
"Template:In lang",
"Template:Lang",
"Template:Cmn",
"Template:Cite encyclopedia",
"Template:Short description",
"Template:For",
"Template:Use British English",
"Template:Lang-en",
"Template:Clarify",
"Template:Cite news",
"Template:Subterranea"
] | https://en.wikipedia.org/wiki/Civil_defense |
7,060 | Chymotrypsin | Chymotrypsin (EC 3.4.21.1, chymotrypsins A and B, alpha-chymar ophth, avazyme, chymar, chymotest, enzeon, quimar, quimotrase, alpha-chymar, alpha-chymotrypsin A, alpha-chymotrypsin) is a digestive enzyme component of pancreatic juice acting in the duodenum, where it performs proteolysis, the breakdown of proteins and polypeptides. Chymotrypsin preferentially cleaves peptide amide bonds where the side chain of the amino acid N-terminal to the scissile amide bond (the P1 position) is a large hydrophobic amino acid (tyrosine, tryptophan, and phenylalanine). These amino acids contain an aromatic ring in their side chain that fits into a hydrophobic pocket (the S1 position) of the enzyme. It is activated in the presence of trypsin. The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine at the P1 position.
Structurally, it is the archetypal structure for its superfamily, the PA clan of proteases.
Chymotrypsin is synthesized in the pancreas. Its precursor is chymotrypsinogen. Trypsin activates chymotrypsinogen by cleaving the peptide bond between Arg15 and Ile16, producing π-chymotrypsin. In turn, the newly exposed amino group (-NH3+) of the Ile16 residue interacts with the side chain of Asp194, producing the "oxyanion hole" and the hydrophobic "S1 pocket". Moreover, chymotrypsin induces its own activation by cleaving at positions 14–15, 146–147, and 148–149, producing α-chymotrypsin (which is more active and stable than π-chymotrypsin). The resulting molecule consists of three polypeptide chains interconnected by disulfide bonds.
In vivo, chymotrypsin is a proteolytic enzyme (serine protease) acting in the digestive systems of many organisms. It facilitates the cleavage of peptide bonds by a hydrolysis reaction, which, despite being thermodynamically favorable, occurs extremely slowly in the absence of a catalyst. The main substrates of chymotrypsin are peptide bonds in which the amino acid N-terminal to the bond is a tryptophan, tyrosine, phenylalanine, or leucine. Like many proteases, chymotrypsin also hydrolyzes amide bonds in vitro, a property that enabled the use of substrate analogs such as N-acetyl-L-phenylalanine p-nitrophenyl amide for enzyme assays.
Chymotrypsin cleaves peptide bonds by attacking the unreactive carbonyl group with a powerful nucleophile, the serine 195 residue located in the active site of the enzyme, which briefly becomes covalently bonded to the substrate, forming an enzyme-substrate intermediate. Along with histidine 57 and aspartic acid 102, this serine residue constitutes the catalytic triad of the active site. These findings rely on inhibition assays and the study of the kinetics of cleavage of the aforementioned substrate, exploiting the fact that the released cleavage product p-nitrophenolate has a yellow colour, enabling measurement of its concentration by measuring light absorbance at 410 nm.
Chymotrypsin catalysis of the hydrolysis of a protein substrate is performed in two steps. First, the nucleophilicity of Ser-195 is enhanced by general-base catalysis in which the proton of the serine hydroxyl group is transferred to the imidazole moiety of His-57 during its attack on the electron-deficient carbonyl carbon of the protein-substrate main chain (k1 step). This occurs via the concerted action of the three amino acid residues of the catalytic triad. The buildup of negative charge on the resultant tetrahedral intermediate is stabilized in the oxyanion hole of the enzyme's active site by the formation of two hydrogen bonds to adjacent main-chain amide hydrogens.
The His-57 imidazolium moiety formed in the k1 step is a general acid catalyst for the k-1 reaction. However, evidence for similar general-acid catalysis of the k2 reaction (Tet2) has been controverted; apparently water provides a proton to the amine leaving group.
Breakdown of Tet1 (via k3) generates an acyl enzyme, which is hydrolyzed, with His-57 acting as a general base (kH2O), through formation of a tetrahedral intermediate that breaks down to regenerate the serine hydroxyl moiety, as well as the protein fragment with the newly formed carboxyl terminus. | [
{
"paragraph_id": 0,
"text": "Chymotrypsin (EC 3.4.21.1, chymotrypsins A and B, alpha-chymar ophth, avazyme, chymar, chymotest, enzeon, quimar, quimotrase, alpha-chymar, alpha-chymotrypsin A, alpha-chymotrypsin) is a digestive enzyme component of pancreatic juice acting in the duodenum, where it performs proteolysis, the breakdown of proteins and polypeptides. Chymotrypsin preferentially cleaves peptide amide bonds where the side chain of the amino acid N-terminal to the scissile amide bond (the P1 position) is a large hydrophobic amino acid (tyrosine, tryptophan, and phenylalanine). These amino acids contain an aromatic ring in their side chain that fits into a hydrophobic pocket (the S1 position) of the enzyme. It is activated in the presence of trypsin. The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine at the P1 position.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Structurally, it is the archetypal structure for its superfamily, the PA clan of proteases.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chymotrypsin is synthesized in the pancreas. Its precursor is chymotrypsinogen. Trypsin activates chymotrypsinogen by cleaving peptidic bonds in positions Arg15 – Ile16 and produces π-chymotrypsin. In turn, aminic group (-NH3) of the Ile16 residue interacts with the side chain of Asp194, producing the \"oxyanion hole\" and the hydrophobic \"S1 pocket\". Moreover, chymotrypsin induces its own activation by cleaving in positions 14–15, 146–147, and 148–149, producing α-chymotrypsin (which is more active and stable than π-chymotrypsin). The resulting molecule is a three-polypeptide molecule interconnected via disulfide bonds.",
"title": "Activation"
},
{
"paragraph_id": 3,
"text": "In vivo, chymotrypsin is a proteolytic enzyme (serine protease) acting in the digestive systems of many organisms. It facilitates the cleavage of peptide bonds by a hydrolysis reaction, which despite being thermodynamically favorable, occurs extremely slowly in the absence of a catalyst. The main substrates of chymotrypsin are peptide bonds in which the amino acid N-terminal to the bond is a tryptophan, tyrosine, phenylalanine, or leucine. Like many proteases, chymotrypsin also hydrolyses amide bonds in vitro, a virtue that enabled the use of substrate analogs such as N-acetyl-L-phenylalanine p-nitrophenyl amide for enzyme assays.",
"title": "Mechanism of action and kinetics"
},
{
"paragraph_id": 4,
"text": "Chymotrypsin cleaves peptide bonds by attacking the unreactive carbonyl group with a powerful nucleophile, the serine 195 residue located in the active site of the enzyme, which briefly becomes covalently bonded to the substrate, forming an enzyme-substrate intermediate. Along with histidine 57 and aspartic acid 102, this serine residue constitutes the catalytic triad of the active site. These findings rely on inhibition assays and the study of the kinetics of cleavage of the aforementioned substrate, exploiting the fact that the enzyme-substrate intermediate p-nitrophenolate has a yellow colour, enabling measurement of its concentration by measuring light absorbance at 410 nm.",
"title": "Mechanism of action and kinetics"
},
{
"paragraph_id": 5,
"text": "Chymotrypsin catalysis of the hydrolysis of a protein substrate (in red) is performed in two steps. First, the nucleophilicity of Ser-195 is enhanced by general-base catalysis in which the proton of the serine hydroxyl group is transferred to the imidazole moiety of His-57 during its attack on the electron-deficient carbonyl carbon of the protein-substrate main chain (k1 step). This occurs via the concerted action of the three-amino-acid residues in the catalytic triad. The buildup of negative charge on the resultant tetrahedral intermediate is stabilized in the enzyme's active site's oxyanion hole, by formation of two hydrogen bonds to adjacent main-chain amide-hydrogens.",
"title": "Mechanism of action and kinetics"
},
{
"paragraph_id": 6,
"text": "The His-57 imidazolium moiety formed in the k1 step is a general acid catalyst for the k-1 reaction. However, evidence for similar general-acid catalysis of the k2 reaction (Tet2) has been controverted; apparently water provides a proton to the amine leaving group.",
"title": "Mechanism of action and kinetics"
},
{
"paragraph_id": 7,
"text": "Breakdown of Tet1 (via k3) generates an acyl enzyme, which is hydrolyzed with His-57 acting as a general base (kH2O) in formation of a tetrahedral intermediate, that breaks down to regenerate the serine hydroxyl moiety, as well as the protein fragment with the newly formed carboxyl terminus.",
"title": "Mechanism of action and kinetics"
}
] | Chymotrypsin (EC 3.4.21.1, chymotrypsins A and B, alpha-chymar ophth, avazyme, chymar, chymotest, enzeon, quimar, quimotrase, alpha-chymar, alpha-chymotrypsin A, alpha-chymotrypsin) is a digestive enzyme component of pancreatic juice acting in the duodenum, where it performs proteolysis, the breakdown of proteins and polypeptides. Chymotrypsin preferentially cleaves peptide amide bonds where the side chain of the amino acid N-terminal to the scissile amide bond (the P1 position) is a large hydrophobic amino acid (tyrosine, tryptophan, and phenylalanine). These amino acids contain an aromatic ring in their side chain that fits into a hydrophobic pocket (the S1 position) of the enzyme. It is activated in the presence of trypsin. The hydrophobic and shape complementarity between the peptide substrate P1 side chain and the enzyme S1 binding cavity accounts for the substrate specificity of this enzyme. Chymotrypsin also hydrolyzes other amide bonds in peptides at slower rates, particularly those containing leucine at the P1 position. Structurally, it is the archetypal structure for its superfamily, the PA clan of proteases. | 2002-02-25T15:43:11Z | 2023-12-25T21:14:32Z | [
"Template:Missing information",
"Template:Reflist",
"Template:MeshName",
"Template:Serine endopeptidases",
"Template:Enzymes",
"Template:Clear",
"Template:Cite book",
"Template:Cite journal",
"Template:Infobox enzyme",
"Template:EC number",
"Template:Infobox protein",
"Template:Citation",
"Template:Refend",
"Template:Portal bar",
"Template:Short description",
"Template:See also",
"Template:Refbegin",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chymotrypsin |
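The two-step acyl-enzyme mechanism described in the Chymotrypsin entry above lends itself to a compact kinetic summary. The LaTeX sketch below is an illustrative addition rather than part of the source article: it assumes the conventional textbook scheme and rate-constant labels (k1 and k-1 for substrate binding, k2 for acylation, k3 for water-mediated deacylation), which do not map one-to-one onto the k1/k-1/k2/k3/kH2O labels used in the entry, and its kcat and KM expressions follow from the usual steady-state treatment.

% Minimal kinetic sketch of acyl-enzyme (chymotrypsin-like) catalysis.
% Assumed standard scheme; E, S, ES, P1, P2 are illustrative labels,
% not taken verbatim from the article above.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \mathrm{E} + \mathrm{S}
  \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\;
  \mathrm{ES}
  \;\xrightarrow{k_{2}}\;
  \text{acyl-E} + \mathrm{P_{1}}
  \;\xrightarrow[\mathrm{H_{2}O}]{k_{3}}\;
  \mathrm{E} + \mathrm{P_{2}}
\]
% Steady-state Michaelis--Menten parameters implied by this scheme:
\[
  k_{\mathrm{cat}} = \frac{k_{2}\,k_{3}}{k_{2}+k_{3}},
  \qquad
  K_{M} = \frac{k_{-1}+k_{2}}{k_{1}} \cdot \frac{k_{3}}{k_{2}+k_{3}}.
\]
% When deacylation is rate-limiting (k3 much smaller than k2), kcat is
% approximately k3, and a single-turnover "burst" of the chromophoric
% leaving group P1 is released, which is the basis of the 410 nm
% absorbance measurement mentioned in the entry.
\end{document}

This is only a sketch; the entry's own k-labels describe the individual proton-transfer and tetrahedral-intermediate steps at a finer granularity than the lumped acylation and deacylation constants used here.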
7,061 | Community emergency response team | In the United States, community emergency response team (CERT) can refer to
Sometimes programs and organizations take different names, such as Neighborhood Emergency Response Team (NERT), or Neighborhood Emergency Team (NET).
The concept of civilian auxiliaries is similar to civil defense, which has a longer history. The CERT concept differs because it includes nonmilitary emergencies, and is coordinated with all levels of emergency authorities, local to national, via an overarching incident command system.
In 2022 the CERT program moved under FEMA's Community Preparedness umbrella along with the Youth Preparedness Council.
A local government agency, often a fire department, police department, or emergency management agency, agrees to sponsor CERT within its jurisdiction. The sponsoring agency liaises with, deploys and may train or supervise the training of CERT members. Many sponsoring agencies employ a full-time community-service person as liaison to the CERT members. In some communities, the liaison is a volunteer and CERT member.
As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grows, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with Incident Command System (ICS) principles. This follows the ICS principle of span of control until the ideal distribution is achieved: one or more teams are formed in each neighborhood within a community.
A Teen Community Emergency Response Team (TEEN CERT), or Student Emergency Response Team (SERT), can be formed from any group of teens. A Teen CERT can be formed as a school club, service organization, Venturing Crew, Explorer Post, or the training can be added to a school's graduation curriculum. Some CERTs form a club or service corporation, and recruit volunteers to perform training on behalf of the sponsoring agency. This reduces the financial and human resource burden on the sponsoring agency.
When not responding to disasters or large emergencies, CERTs may
Some sponsoring agencies use state and federal grants to purchase response tools and equipment for their members and team(s) (subject to Stafford Act limitations). Most CERTs also acquire their own supplies, tools, and equipment. As community members, CERTs are aware of the specific needs of their community and equip the teams accordingly.
The basic idea is to use CERT to perform the large number of tasks needed in emergencies. This frees highly trained professional responders for more technical tasks. Much of CERT training concerns the Incident Command System and organization, so CERT members fit easily into larger command structures.
A team member may self-activate (self-deploy) when their own neighborhood is affected by disaster or when an incident takes place at their current location (e.g., home, work, school, or church, or if an accident occurs in front of them). They should not hear about an incident and drive or respond to an event unless told to do so by their team member or sponsoring agency (as specified in chapters 1 and 6 of the basic CERT training). An effort is made to report their response status to the sponsoring agency. A self-activated team will size up the loss in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment. They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene.
Teams in neighborhoods not affected by disaster may be deployed or activated by the sponsoring agency. The sponsoring agency may communicate with neighborhood CERT leaders through an organic communication team. In some areas the communications may be by amateur radio, FRS, GMRS or MURS radio, dedicated telephone or fire-alarm networks. In other areas, relays of bicycle-equipped runners can effectively carry messages between the teams and the local emergency operations center.
The sponsoring agency may activate and dispatch teams in order to gather or respond to intelligence about an incident. Teams may be dispatched to affected neighborhoods, or organized to support operations. CERT members may augment support staff at an Incident Command Post or Emergency Operations Center. Additional teams may also be created to guard a morgue, locate supplies and food, convey messages to and from other CERTs and local authorities, and other duties on an as-needed basis as identified by the team leader.
In the short term, CERTs perform data gathering, especially to locate mass-casualties requiring professional response, or situations requiring professional rescues, simple fire-fighting tasks (for example, small fires, turning off gas), light search and rescue, damage evaluation of structures, triage and first aid. In the longer term, CERTs may assist in the evacuation of residents, or assist with setting up a neighborhood shelter.
While responding, CERT members are temporary volunteer government workers. In some areas (such as California, Hawaii, and Kansas), registered, activated CERT members are eligible for worker's compensation for on-the-job injuries during declared disasters.
The Federal Emergency Management Agency (FEMA) recommends that the standard, minimum ten-person team be composed as follows:
Because every CERT member in a community receives the same core instruction, any team member has the training necessary to assume any of these roles. This is important during a disaster response because not all members of a regular team may be available to respond. Hasty teams may be formed by whichever members are responding at the time. Additionally, members may need to adjust team roles due to stress, fatigue, injury, or other circumstances.
While state and local jurisdictions will implement training in the manner that best suits the community, FEMA's National CERT Program has an established curriculum. Jurisdictions may augment the training, but are strongly encouraged to deliver the entire core content. The CERT core curriculum for the basic course is composed of the following nine units (time is instructional hours):
CERT training emphasizes safely "doing the most good for the most people as quickly as possible" when responding to a disaster. For this reason, cardiopulmonary resuscitation (CPR) training is not included in the core curriculum, as it is time and responder intensive in a mass-casualty incident. However, many jurisdictions encourage or require CERT members to obtain CPR training. Many CERT programs provide or encourage members to take additional first aid training. Some CERT members may also take training to become a certified first responder or emergency medical technician.
Many CERT programs also provide training in amateur radio operation, shelter operations, flood response, community relations, mass care, the incident command system (ICS), and the National Incident Management System (NIMS).
Each unit of CERT training is ideally delivered by professional responders or other experts in the field addressed by the unit. This is done to help build unity between CERT members and responders, keep the attention of students, and help the professional response organizations be comfortable with the training which CERT members receive.
Each course of instruction is ideally facilitated by one or more instructors certified in the CERT curriculum by the state or sponsoring agency. Facilitating instructors provide continuity between units, and help ensure that the CERT core curriculum is being delivered successfully. Facilitating instructors also perform set-up and tear-down of the classroom, provide instructional materials for the course, record student attendance and other tasks which assist the professional responder in delivering their unit as efficiently as possible.
CERT training is provided free to interested members of the community, and is delivered in a group classroom setting. People may complete the training without obligation to join a CERT. Citizen Corps grant funds can be used to print and provide each student with a printed manual. Some sponsoring agencies use Citizen Corps grant funds to purchase disaster response tool kits. These kits are offered as an incentive to join a CERT, and must be returned to the sponsoring agency when members resign from CERT.
Some sponsoring agencies require a criminal background check of all trainees before allowing them to participate on a CERT. For example, the city of Albuquerque, New Mexico requires all volunteers to pass a background check, while the city of Austin, Texas does not require a background check to take part in training classes but requires members to undergo a background check in order to receive a CERT badge and directly assist first responders during an activation of the Emergency Operations Center. However, most programs do not require a criminal background check in order to participate.
The CERT curriculum (including the Train-the-Trainer and Program Manager courses) was updated in 2019 to reflect feedback from instructors across the nation. | [
{
"paragraph_id": 0,
"text": "In the United States, community emergency response team (CERT) can refer to",
"title": ""
},
{
"paragraph_id": 1,
"text": "Sometimes programs and organizations take different names, such as Neighborhood Emergency Response Team (NERT), or Neighborhood Emergency Team (NET).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The concept of civilian auxiliaries is similar to civil defense, which has a longer history. The CERT concept differs because it includes nonmilitary emergencies, and is coordinated with all levels of emergency authorities, local to national, via an overarching incident command system.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In 2022 the CERT program moved under FEMA's Community Preparedness umbrella along with the Youth Preparedness Council.",
"title": ""
},
{
"paragraph_id": 4,
"text": "A local government agency, often a fire department, police department, or emergency management agency, agrees to sponsor CERT within its jurisdiction. The sponsoring agency liaises with, deploys and may train or supervise the training of CERT members. Many sponsoring agencies employ a full-time community-service person as liaison to the CERT members. In some communities, the liaison is a volunteer and CERT member.",
"title": "Organization"
},
{
"paragraph_id": 5,
"text": "As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grow, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with ICS principles. This follows the Incident Command System (ICS) principle of Span of control until the ideal distribution is achieved: one or more teams are formed at each neighborhood within a community.",
"title": "Organization"
},
{
"paragraph_id": 6,
"text": "A Teen Community Emergency Response Team (TEEN CERT), or Student Emergency Response Team (SERT), can be formed from any group of teens. A Teen CERT can be formed as a school club, service organization, Venturing Crew, Explorer Post, or the training can be added to a school's graduation curriculum. Some CERTs form a club or service corporation, and recruit volunteers to perform training on behalf of the sponsoring agency. This reduces the financial and human resource burden on the sponsoring agency.",
"title": "Organization"
},
{
"paragraph_id": 7,
"text": "When not responding to disasters or large emergencies, CERTs may",
"title": "Organization"
},
{
"paragraph_id": 8,
"text": "Some sponsoring agencies use state and federal grants to purchase response tools and equipment for their members and team(s) (subject to Stafford Act limitations). Most CERTs also acquire their own supplies, tools, and equipment. As community members, CERTs are aware of the specific needs of their community and equip the teams accordingly.",
"title": "Organization"
},
{
"paragraph_id": 9,
"text": "The basic idea is to use CERT to perform the large number of tasks needed in emergencies. This frees highly trained professional responders for more technical tasks. Much of CERT training concerns the Incident Command System and organization, so CERT members fit easily into larger command structures.",
"title": "Response"
},
{
"paragraph_id": 10,
"text": "A team member may self-activate (self-deploy) when their own neighborhood is affected by disaster or when an incident takes place at their current location (ex. home, work, school, church, or if an accident occurred in front of them). They should not hear about an incident and drive or respond to an event unless told to do so by their team member or sponsoring agency (as specified in chapters 1 and 6 of the basic CERT Training). An effort is made to report their response status to the sponsoring agency. A self-activated team will size-up the loss in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment. They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene.",
"title": "Response"
},
{
"paragraph_id": 11,
"text": "Teams in neighborhoods not affected by disaster may be deployed or activated by the sponsoring agency. The sponsoring agency may communicate with neighborhood CERT leaders through an organic communication team. In some areas the communications may be by amateur radio, FRS, GMRS or MURS radio, dedicated telephone or fire-alarm networks. In other areas, relays of bicycle-equipped runners can effectively carry messages between the teams and the local emergency operations center.",
"title": "Response"
},
{
"paragraph_id": 12,
"text": "The sponsoring agency may activate and dispatch teams in order to gather or respond to intelligence about an incident. Teams may be dispatched to affected neighborhoods, or organized to support operations. CERT members may augment support staff at an Incident Command Post or Emergency Operations Center. Additional teams may also be created to guard a morgue, locate supplies and food, convey messages to and from other CERTs and local authorities, and other duties on an as-needed basis as identified by the team leader.",
"title": "Response"
},
{
"paragraph_id": 13,
"text": "In the short term, CERTs perform data gathering, especially to locate mass-casualties requiring professional response, or situations requiring professional rescues, simple fire-fighting tasks (for example, small fires, turning off gas), light search and rescue, damage evaluation of structures, triage and first aid. In the longer term, CERTs may assist in the evacuation of residents, or assist with setting up a neighborhood shelter.",
"title": "Response"
},
{
"paragraph_id": 14,
"text": "While responding, CERT members are temporary volunteer government workers. In some areas, (such as California, Hawaii and Kansas) registered, activated CERT members are eligible for worker's compensation for on-the-job injuries during declared disasters.",
"title": "Response"
},
{
"paragraph_id": 15,
"text": "The Federal Emergency Management Agency (FEMA) recommends that the standard, minimum ten-person team be comprised as follows:",
"title": "Member roles"
},
{
"paragraph_id": 16,
"text": "Because every CERT member in a community receives the same core instruction, any team member has the training necessary to assume any of these roles. This is important during a disaster response because not all members of a regular team may be available to respond. Hasty teams may be formed by whichever members are responding at the time. Additionally, members may need to adjust team roles due to stress, fatigue, injury, or other circumstances.",
"title": "Member roles"
},
{
"paragraph_id": 17,
"text": "While state and local jurisdictions will implement training in the manner that best suits the community, FEMA's National CERT Program has an established curriculum. Jurisdictions may augment the training, but are strongly encouraged to deliver the entire core content. The CERT core curriculum for the basic course is composed of the following nine units (time is instructional hours):",
"title": "Training"
},
{
"paragraph_id": 18,
"text": "CERT training emphasizes safely \"doing the most good for the most people as quickly as possible\" when responding to a disaster. For this reason, cardiopulmonary resuscitation (CPR) training is not included in the core curriculum, as it is time and responder intensive in a mass-casualty incident. However, many jurisdictions encourage or require CERT members to obtain CPR training. Many CERT programs provide or encourage members to take additional first aid training. Some CERT members may also take training to become a certified first responder or emergency medical technician.",
"title": "Training"
},
{
"paragraph_id": 19,
"text": "Many CERT programs also provide training in amateur radio operation, shelter operations, flood response, community relations, mass care, the incident command system (ICS), and the National Incident Management System (NIMS).",
"title": "Training"
},
{
"paragraph_id": 20,
"text": "Each unit of CERT training is ideally delivered by professional responders or other experts in the field addressed by the unit. This is done to help build unity between CERT members and responders, keep the attention of students, and help the professional response organizations be comfortable with the training which CERT members receive.",
"title": "Training"
},
{
"paragraph_id": 21,
"text": "Each course of instruction is ideally facilitated by one or more instructors certified in the CERT curriculum by the state or sponsoring agency. Facilitating instructors provide continuity between units, and help ensure that the CERT core curriculum is being delivered successfully. Facilitating instructors also perform set-up and tear-down of the classroom, provide instructional materials for the course, record student attendance and other tasks which assist the professional responder in delivering their unit as efficiently as possible.",
"title": "Training"
},
{
"paragraph_id": 22,
"text": "CERT training is provided free to interested members of the community, and is delivered in a group classroom setting. People may complete the training without obligation to join a CERT. Citizen Corps grant funds can be used to print and provide each student with a printed manual. Some sponsoring agencies use Citizen Corps grant funds to purchase disaster response tool kits. These kits are offered as an incentive to join a CERT, and must be returned to the sponsoring agency when members resign from CERT.",
"title": "Training"
},
{
"paragraph_id": 23,
"text": "Some sponsoring agencies require a criminal background-check of all trainees before allowing them to participate on a CERT. For example, the city of Albuquerque, New Mexico require all volunteers to pass a background check, while the city of Austin, Texas does not require a background check to take part in training classes but requires members to undergo a background check in order to receive a CERT badge and directly assist first responders during an activation of the Emergency Operations Center. However, most programs do not require a criminal background check in order to participate.",
"title": "Training"
},
{
"paragraph_id": 24,
"text": "The CERT curriculum (including the Train-the-Trainer and Program Manager courses) was updated in 2019 to reflect feedback from instructors across the nation.",
"title": "Training"
}
] | In the United States, community emergency response team (CERT) can refer to an implementation of FEMA's National CERT Program, administered by a local sponsoring agency, which provides a standardized training and implementation framework to community members;
an organization of volunteer emergency workers who have received specific training in basic disaster response skills, and who agree to supplement existing emergency responders in the event of a major disaster. Sometimes programs and organizations take different names, such as Neighborhood Emergency Response Team (NERT), or Neighborhood Emergency Team (NET). The concept of civilian auxiliaries is similar to civil defense, which has a longer history. The CERT concept differs because it includes nonmilitary emergencies, and is coordinated with all levels of emergency authorities, local to national, via an overarching incident command system. In 2022 the CERT program moved under FEMA's Community Preparedness umbrella along with the Youth Preparedness Council. | 2002-02-25T15:51:15Z | 2023-12-01T19:02:26Z | [
"Template:Citizen Corps partners",
"Template:Infobox organization",
"Template:Snd",
"Template:Reflist",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Community_emergency_response_team |
7,063 | Catapult | A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms.
In use since ancient times, the catapult has proven to be one of the most persistently effective mechanisms in warfare. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship.
The earliest catapults date to at least the 7th century BC, with King Uzziah of Judah recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in the Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel, a type of traction trebuchet and catapult, appeared in ancient China. Early uses were also attributed to Ajatashatru of Magadha in his 5th-century BC war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC.
The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Ancient Greek καταπέλτης (katapeltēs), itself from κατά (kata), "downwards", and πάλλω (pallō), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India, where they were used by the Magadhan Emperor Ajatashatru around the early to mid-5th century BC.
The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC) described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events. The introduction of crossbows, however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica.
A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyros has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Miletus between 421 BC and 401 BC. The bows of these machines already featured a winched pull-back system and could apparently throw two missiles at once.
Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics (belos = "projectile"; poietike = "(art) of making") circa 200 BC. The central principle to this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises.
From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The latter entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges.
The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships.
In chronological order:
Castles and fortified walled cities were common during this period, and catapults were used as siege weapons against them. As well as being used in attempts to breach walls, catapults could hurl incendiary missiles, diseased carcasses, or garbage over the walls.
Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (885–6 A.D.) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure.
The most widely used catapults throughout the Middle Ages were as follows:
The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars.
The SPBG (Silent Projector of Bottles and Grenades) was a proposed Soviet anti-tank weapon that launched grenades from a spring-loaded shuttle up to 100 m (330 ft).
In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the United States.
Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting.
In the 1990s and early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers, first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air for 100 feet (30 m). The practice has been discontinued due to a fatality at the Water Park. There had also been an injury when the trebuchet was in use on private property. The injury and the death occurred when the two participants involved failed to land on the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs.
Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has been replaced by flywheels and later linear motors.
Pumpkin chunking is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon).
In January 2011, a homemade catapult that was being used to smuggle cannabis into the United States from Mexico was discovered. The machine was found 20 ft (6.1 m) from the border fence with 4.4-pound (2.0 kg) bales of cannabis ready to launch. | [
{
"paragraph_id": 0,
"text": "A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In use since ancient times, the catapult has proven to be one of the most persistently effective mechanisms in warfare. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a \"slingshot\") to a mechanism for launching aircraft from a ship.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The earliest catapults date to at least the 7th century BC, with King Uzziah, of Judah, recorded as equipping the walls of Jerusalem with machines that shot \"great stones\". Catapults are mentioned in Yajurveda under the name \"Jyah\" in chapter 30, verse 7. In the 5th century BC the mangonel appeared in ancient China, a type of traction trebuchet and catapult. Early uses were also attributed to Ajatashatru of Magadha in his, 5th century BC, war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek Ancient Greek: καταπέλτης (katapeltēs), itself from κατά (kata), \"downwards\" and πάλλω (pallō), \"to toss, to hurl\". Catapults were invented by the ancient Greeks and in ancient India where they were used by the Magadhan Emperor Ajatashatru around the early to mid 5th century BC.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially \"the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them\". The historian Diodorus Siculus (fl. 1st century BC), described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events then. The introduction of crossbows however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the \"belly-bow\", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica.",
"title": "Greek and Roman catapults"
},
{
"paragraph_id": 5,
"text": "A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Milet between 421 BC and 401 BC. The bows of these machines already featured a winched pull back system and could apparently throw two missiles at once.",
"title": "Greek and Roman catapults"
},
{
"paragraph_id": 6,
"text": "Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics (belos = \"projectile\"; poietike = \"(art) of making\") circa 200 BC. The central principle to this theory was that \"all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs\". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises.",
"title": "Greek and Roman catapults"
},
{
"paragraph_id": 7,
"text": "From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The later entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by \"a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young\". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges.",
"title": "Greek and Roman catapults"
},
{
"paragraph_id": 8,
"text": "The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships.",
"title": "Greek and Roman catapults"
},
{
"paragraph_id": 9,
"text": "In chronological order:",
"title": "Other ancient catapults"
},
{
"paragraph_id": 10,
"text": "Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, or diseased carcasses or garbage could be catapulted over the walls.",
"title": "Medieval catapults"
},
{
"paragraph_id": 11,
"text": "Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (885–6 A.D.) \"saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults\", to little effect, resulting in failure.",
"title": "Medieval catapults"
},
{
"paragraph_id": 12,
"text": "The most widely used catapults throughout the Middle Ages were as follows:",
"title": "Medieval catapults"
},
{
"paragraph_id": 13,
"text": "The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars.",
"title": "Modern use"
},
{
"paragraph_id": 14,
"text": "The SPBG (Silent Projector of Bottles and Grenades) was a soviet proposal anti-tank weapon that launched grenades from a spring loaded shuttle up to 100 m (330 ft).",
"title": "Modern use"
},
{
"paragraph_id": 15,
"text": "In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the United States.",
"title": "Modern use"
},
{
"paragraph_id": 16,
"text": "Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. Small catapults, referred to as \"traps\", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting.",
"title": "Modern use"
},
{
"paragraph_id": 17,
"text": "In the 1990s and early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air for 100 feet (30 m). The practice has been discontinued due to a fatality at the Water Park. There had been an injury when the trebuchet was in use on private property. Injury and death occurred when those two participants failed to land onto the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators \"imposed stricter safety measures.\" Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs.",
"title": "Modern use"
},
{
"paragraph_id": 18,
"text": "Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has been replaced by flywheels and later linear motors.",
"title": "Modern use"
},
{
"paragraph_id": 19,
"text": "Pumpkin chunking is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon).",
"title": "Modern use"
},
{
"paragraph_id": 20,
"text": "In January 2011, a homemade catapult was discovered that was used to smuggle cannabis into the United States from Mexico. The machine was found 20 ft (6.1 m) from the border fence with 4.4 pounds (2.0 kg) bales of cannabis ready to launch.",
"title": "Modern use"
}
] | A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms. In use since ancient times, the catapult has proven to be one of the most persistently effective mechanisms in warfare. In modern times the term can apply to devices ranging from a simple hand-held implement to a mechanism for launching aircraft from a ship. The earliest catapults date to at least the 7th century BC, with King Uzziah, of Judah, recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel appeared in ancient China, a type of traction trebuchet and catapult. Early uses were also attributed to Ajatashatru of Magadha in his, 5th century BC, war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC. | 2001-11-10T18:22:38Z | 2023-09-04T02:47:00Z | [
"Template:Cite book",
"Template:ISBN",
"Template:Short description",
"Template:Lang-grc",
"Template:Bibleverse-lb",
"Template:Webarchive",
"Template:Commons category",
"Template:Cite web",
"Template:Harvnb",
"Template:About",
"Template:Citation needed",
"Template:Page needed",
"Template:Cite magazine",
"Template:Other uses",
"Template:Sfn",
"Template:Div col",
"Template:Ancient mechanical artillery and hand-held missile weapons",
"Template:Reflist",
"Template:Citation",
"Template:Cite journal",
"Template:Efn",
"Template:Clarify",
"Template:Dubious",
"Template:Div col end",
"Template:Wiktionary",
"Template:Medieval mechanical artillery and hand-held missile weapons",
"Template:Authority control",
"Template:Main article",
"Template:Convert",
"Template:Cite EB1911",
"Template:Cite news",
"Template:Pp-vandalism",
"Template:Notelist"
] | https://en.wikipedia.org/wiki/Catapult |
7,066 | Cinquain | Cinquain /ˈsɪŋkeɪn/ is a class of poetic forms that employ a 5-line pattern. Earlier used to describe any five-line form, it now refers to one of several forms that are defined by specific rules and guidelines.
The modern form, known as the American cinquain and inspired by Japanese haiku and tanka, is akin in spirit to the work of the Imagists. In her 1915 collection titled Verse, published a year after her death, Adelaide Crapsey included 28 cinquains. Crapsey's American Cinquain form developed in two stages. The first, fundamental form is a stanza of five lines of accentual verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses. Then Crapsey decided to make the criterion a stanza of five lines of accentual-syllabic verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses and 2, 4, 6, 8, and 2 syllables. Iambic feet were meant to be the standard for the cinquain, which made the dual criteria match perfectly. Some resource materials define classic cinquains as solely iambic, but that is not necessarily so. In contrast to the Eastern forms upon which she based them, Crapsey always titled her cinquains, effectively utilizing the title as a sixth line. Crapsey's cinquain depends on strict structure and intense physical imagery to communicate a mood or feeling.
The form is illustrated by Crapsey's "November Night":
Listen... With faint dry sound, Like steps of passing ghosts, The leaves, frost-crisp'd, break from the trees And fall.
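Because the Crapsey form is defined by a fixed 2, 4, 6, 8, 2 syllable scheme, a poem such as "November Night" can be checked mechanically. The Python sketch below is a minimal, approximate illustration: the naive vowel-group syllable counter and the helper names estimate_syllables and check_cinquain are our own assumptions rather than any established tool, and a serious checker would use a pronunciation dictionary instead.

import re

CRAPSEY_PATTERN = (2, 4, 6, 8, 2)  # target syllables per line in the American cinquain

def estimate_syllables(word):
    # Very rough heuristic: count vowel groups, ignoring a trailing silent 'e'.
    word = word.lower().strip("'\".,;:!?-")
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(groups)) if word else 0

def check_cinquain(lines):
    # Pair each line's estimated syllable count with the expected count.
    counts = [sum(estimate_syllables(w) for w in line.split()) for line in lines]
    return list(zip(counts, CRAPSEY_PATTERN))

poem = [
    "Listen...",
    "With faint dry sound,",
    "Like steps of passing ghosts,",
    "The leaves, frost-crisp'd, break from the trees",
    "And fall.",
]
print(check_cinquain(poem))  # the heuristic only roughly matches (2, 4, 6, 8, 2)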
The Scottish poet William Soutar also wrote over one hundred American cinquains (he labelled them "epigrams") between 1933 and 1940.
The Crapsey cinquain has subsequently seen a number of variations by modern poets, including:
The didactic cinquain is closely related to the Crapsey cinquain. It is an informal cinquain widely taught in elementary schools and has been featured in, and popularized by, children's media resources, including Junie B. Jones and PBS Kids. This form is also embraced by young adults and older poets for its expressive simplicity. The prescriptions of this type of cinquain refer to word count, not syllables and stresses. Ordinarily, the first line is a one-word title, the subject of the poem; the second line is a pair of adjectives describing that title; the third line is a three-word phrase that gives more information about the subject (often a list of three gerunds); the fourth line consists of four words describing feelings related to that subject; and the fifth line is a single word synonym or other reference for the subject from line one. For example:
Snow Silent, white Dancing, falling, drifting Covering everything it touches Blanket | [
{
"paragraph_id": 0,
"text": "Cinquain /ˈsɪŋkeɪn/ is a class of poetic forms that employ a 5-line pattern. Earlier used to describe any five-line form, it now refers to one of several forms that are defined by specific rules and guidelines.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The modern form, known as American cinquain inspired by Japanese haiku and tanka, is akin in spirit to that of the Imagists. In her 1915 collection titled Verse, published a year after her death, Adelaide Crapsey included 28 cinquains. Crapsey's American Cinquain form developed in two stages. The first, fundamental form is a stanza of five lines of accentual verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses. Then Crapsey decided to make the criterion a stanza of five lines of accentual-syllabic verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses and 2, 4, 6, 8, and 2 syllables. Iambic feet were meant to be the standard for the cinquain, which made the dual criteria match perfectly. Some resource materials define classic cinquains as solely iambic, but that is not necessarily so. In contrast to the Eastern forms upon which she based them, Crapsey always titled her cinquains, effectively utilizing the title as a sixth line. Crapsey's cinquain depends on strict structure and intense physical imagery to communicate a mood or feeling.",
"title": "American cinquain"
},
{
"paragraph_id": 2,
"text": "The form is illustrated by Crapsey's \"November Night\":",
"title": "American cinquain"
},
{
"paragraph_id": 3,
"text": "Listen... With faint dry sound, Like steps of passing ghosts, The leaves, frost-crisp'd, break from the trees And fall.",
"title": "American cinquain"
},
{
"paragraph_id": 4,
"text": "The Scottish poet William Soutar also wrote over one hundred American cinquains (he labelled them \"epigrams\") between 1933 and 1940.",
"title": "American cinquain"
},
{
"paragraph_id": 5,
"text": "The Crapsey cinquain has subsequently seen a number of variations by modern poets, including:",
"title": "Cinquain variations"
},
{
"paragraph_id": 6,
"text": "The didactic cinquain is closely related to the Crapsey cinquain. It is an informal cinquain widely taught in elementary schools and has been featured in, and popularized by, children's media resources, including Junie B. Jones and PBS Kids. This form is also embraced by young adults and older poets for its expressive simplicity. The prescriptions of this type of cinquain refer to word count, not syllables and stresses. Ordinarily, the first line is a one-word title, the subject of the poem; the second line is a pair of adjectives describing that title; the third line is a three-word phrase that gives more information about the subject (often a list of three gerunds); the fourth line consists of four words describing feelings related to that subject; and the fifth line is a single word synonym or other reference for the subject from line one. For example:",
"title": "Didactic cinquain"
},
{
"paragraph_id": 7,
"text": "Snow Silent, white Dancing, falling, drifting Covering everything it touches Blanket",
"title": "Didactic cinquain"
}
] | Cinquain is a class of poetic forms that employ a 5-line pattern. Earlier used to describe any five-line form, it now refers to one of several forms that are defined by specific rules and guidelines. | 2002-02-25T15:43:11Z | 2023-09-15T07:32:38Z | [
"Template:Poetic forms",
"Template:Authority control",
"Template:Short description",
"Template:Unreferenced section",
"Template:Reflist",
"Template:Cite magazine",
"Template:Cite web",
"Template:Cite news",
"Template:IPAc-en",
"Template:Cite journal",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Cinquain |
7,067 | Cook Islands | The Cook Islands is a self-governing island country in the South Pacific Ocean in free association with New Zealand. It comprises 15 islands whose total land area is 236.7 square kilometres (91 sq mi). The Cook Islands' Exclusive Economic Zone (EEZ) covers 1,960,027 square kilometres (756,771 sq mi) of ocean.
The Cook Islands is in free association with New Zealand. Since 2001, the Cook Islands has directed its own foreign and defence policy, though it has no armed forces and therefore relies on New Zealand for its defence. In recent decades, the Cook Islands have adopted an increasingly assertive foreign policy, and a Cook Islander, Henry Puna, currently serves as Secretary General of the Pacific Islands Forum. Most Cook Islanders are also citizens of New Zealand, but they also have the status of Cook Islands nationals, which is not given to other New Zealand citizens. The Cook Islands have been an active member of the Pacific Community since 1980.
The Cook Islands' main population centres are on the island of Rarotonga (10,863 in 2021). The Rarotonga International Airport, the main international gateway to the country, is located on this island.
The census of 2021 put the total population at 14,987. There is also a larger population of Cook Islanders in New Zealand and Australia: in the 2018 New Zealand census, 80,532 people said they were Cook Islanders, or of Cook Islands descent. The last Australian census recorded 28,000 Cook Islanders living in Australia, many with Australian citizenship.
With over 168,000 visitors to the islands in 2018, tourism is the country's main industry, and the leading element of the economy, ahead of offshore banking, pearls, and marine and fruit exports.
The Cook Islands comprise 15 islands split between two island groups, which have been called individual names in indigenous languages including Cook Islands Māori and Pukapukan throughout the time they have been inhabited. The first name given by Europeans was Gente Hermosa (beautiful people) by Spanish explorers to Rakahanga in 1606.
The islands as a whole are named after British Captain James Cook, who visited during the 1770s and named Manuae "Hervey Island" after Augustus Hervey, 3rd Earl of Bristol. The southern island group became known as the "Hervey Islands" after this. In the 1820s, Russian Admiral Adam Johann von Krusenstern referred to the southern islands as the "Cook Islands" in his Atlas de l'Ocean Pacifique. The entire territory (including the northern island group) was not known as the "Cook Islands" until after its annexation by New Zealand in the early 20th century. In 1901, the New Zealand parliament passed the Cook and other Islands Government Act, demonstrating that the name "Cook Islands" only referred to some of the islands. However, this situation had changed by the passage of the Cook Islands Act 1915, which defined the Cooks' area and included all presently included islands.
The islands' official name in Cook Islands Māori is Kūki 'Āirani, a transliteration of the English name.
The Cook Islands were first settled around AD 1000 by Polynesian people who are thought to have migrated from Tahiti, an island 1,154 kilometres (717 mi) to the northeast of the main island of Rarotonga.
The first European contact with the islands took place in 1595 when the Spanish navigator Álvaro de Mendaña de Neira sighted the island of Pukapuka, which he named San Bernardo (Saint Bernard). Pedro Fernandes de Queirós, a Portuguese captain at the service of the Spanish Crown, made the first European landing in the islands when he set foot on Rakahanga in 1606, calling the island Gente Hermosa (Beautiful People).
The British navigator Captain James Cook arrived in 1773 and again in 1777 giving the island of Manuae the name Hervey Island. The Hervey Islands later came to be applied to the entire southern group. The name "Cook Islands", in honour of Cook, first appeared on a Russian naval chart published by Adam Johann von Krusenstern in the 1820s.
In 1813 John Williams, a missionary on the colonial brig Endeavour (not the same ship as Cook's) made the first recorded European sighting of Rarotonga. The first recorded landing on Rarotonga by Europeans was in 1814 by the Cumberland; trouble broke out between the sailors and the Islanders and many were killed on both sides. The islands saw no more Europeans until English missionaries arrived in 1821. Christianity quickly took hold in the culture and many islanders are Christians today.
The islands were a popular stop in the 19th century for whaling ships from the United States, Britain and Australia. They visited, from at least 1826, to obtain water, food, and firewood. Their favourite islands were Rarotonga, Aitutaki, Mangaia and Penrhyn.
The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. On 6 September 1900, the islanders' leaders presented a petition asking that the islands (including Niue "if possible") should be annexed as British territory. On 8 and 9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people. A British Proclamation was issued, stating that the cessions were accepted and the islands declared parts of Her Britannic Majesty's dominions. However, it did not include Aitutaki. Even though the inhabitants regarded themselves as British subjects, the Crown's title was unclear until the island was formally annexed by that Proclamation. In 1901 the islands were included within the boundaries of the Colony of New Zealand by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time.
The Cook Islands responded to the call for service when World War I began, immediately sending five contingents, close to 500 men, to the war. The island's young men volunteered at the outbreak of the war to reinforce the Māori Contingents and the Australian and New Zealand Mounted Rifles. A Patriotic Fund was set up very quickly, raising funds to support the war effort. The Cook Islanders were trained at Narrow Neck Camp in Devonport, and the first recruits departed on 13 October 1915 on the SS Te Anau. The ship arrived in Egypt just as the New Zealand units were about to be transferred to the Western Front. In September, 1916, the Pioneer Battalion, a combination of Cook Islanders, Māori and Pakeha soldiers, saw heavy action in the Allied attack on Flers, the first battle of the Somme. Three Cook Islanders from this first contingent died from enemy action and at least ten died of disease as they struggled to adapt to the conditions in Europe. The 2nd and 3rd Cook Island Contingents were part of the Sinai-Palestine campaign, first in a logistical role for the Australian and New Zealand Mounted Rifles at their Moascar base and later in ammunition supply for the Royal Artillery. After the war, the men returned to the outbreak of the influenza epidemic in New Zealand, and this, along with European diseases meant that a large number did not survive and died in New Zealand or on their return home over the coming years.
When the British Nationality and New Zealand Citizenship Act 1948 came into effect on 1 January 1949, Cook Islanders who were British subjects automatically gained New Zealand citizenship. The islands remained a New Zealand dependent territory until the New Zealand Government decided to grant them self-governing status. On 4 August 1965, a constitution was promulgated. The first Monday in August is celebrated each year as Constitution Day. Albert Henry of the Cook Islands Party was elected as the first Premier and was knighted by Queen Elizabeth II. Henry led the nation until 1978, when he was accused of vote-rigging and resigned. He was stripped of his knighthood in 1979. He was succeeded by Tom Davis of the Democratic Party who held that position until March 1983.
On 13 July 2017, the Cook Islands established Marae Moana, making it the world's largest protected area by size.
In March 2019, it was reported that the Cook Islands had plans to change its name and remove the reference to Captain James Cook in favour of "a title that reflects its 'Polynesian nature'". It was later reported in May 2019 that the proposed name change had been poorly received by the Cook Islands diaspora. As a compromise, it was decided that the English name of the islands would not be altered, but that a new Cook Islands Māori name would be adopted to replace the current name, a transliteration from English. Discussions over the name continued in 2020.
On September 25, 2023, the United States recognized Cook Islands sovereignty and established diplomatic relations.
The Cook Islands are in the South Pacific Ocean, north-east of New Zealand, between American Samoa and French Polynesia. There are 15 major islands spread over 2,200,000 square kilometres (850,000 sq mi) of ocean, divided into two distinct groups: the Southern Cook Islands and the Northern Cook Islands of coral atolls.
The islands were formed by volcanic activity; the northern group is older and consists of six atolls, which are sunken volcanoes topped by coral growth. The climate is moderate to tropical. The Cook Islands consist of 15 islands and two reefs. From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were the cyclones Martin and Percy. Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests.
Note: The table is ordered from north to south. Population figures from the 2021 census.
The Cook Islands are a representative democracy with a parliamentary system in an associated state relationship with New Zealand. Executive power is exercised by the government, with the Prime Minister as head of government. Legislative power is vested in both the government and the Parliament of the Cook Islands. While the country is de jure unicameral, there are two legislative bodies with the House of Ariki acting as a de facto upper house.
There is a multi-party system. The Judiciary is independent of the executive and the legislature. The head of state is the King of New Zealand, who is represented in the Cook Islands by the King's Representative.
The islands are self-governing in "free association" with New Zealand. Under the Cook Islands constitution, New Zealand cannot pass laws for the Cook Islands. Rarotonga has its own foreign service and diplomatic network. Cook Islands nationals have the right to become citizens of New Zealand and can receive New Zealand government services when in New Zealand, but the reverse is not true; New Zealand citizens are not Cook Islands nationals. Despite this, as of 2018, the Cook Islands had diplomatic relations in its own name with 52 other countries. The Cook Islands is not a United Nations member state, but, along with Niue, has had their "full treaty-making capacity" recognised by the United Nations Secretariat, and is a full member of the World Health Organization (WHO), UNESCO, the International Civil Aviation Organization, the International Maritime Organization and the UN Food and Agriculture Organization, all UN specialized agencies, and is an associate member of the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) and a Member of the Assembly of States of the International Criminal Court.
The Cook Islands Ambassador to the International Maritime Organisation, Captain Ian Finley, has faced controversy for accepting $700,000 of undisclosed funding from a shipping industry lobby group that he runs with his wife, at the same time as he was helping design environmental rules for the shipping industry.
Despite being one of the most vulnerable countries to climate change, the Cook Islands paradoxically opposed any measures to reduce greenhouse gas emissions from the shipping industry, according to Climate Home.
On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing any American claims to Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990 the Cook Islands and France signed a treaty that delimited the boundary between the Cook Islands and French Polynesia. In late August 2012, United States Secretary of State Hillary Clinton visited the islands. In 2017, the Cook Islands signed the UN Treaty on the Prohibition of Nuclear Weapons. On 25 September 2023, the Cook Islands and the United States of America established diplomatic relations under the leadership of Prime Minister Mark Brown at a ceremony in Washington, DC.
Defence is the responsibility of New Zealand, in consultation with the Cook Islands and at its request. The New Zealand Defence Force has responsibilities for protecting the territory as well as its offshore Exclusive Economic Zone (EEZ). The total offshore EEZ is about 2 million square kilometers. Vessels of the Royal New Zealand Navy can be employed for this task including its Protector-class offshore patrol vessels. These naval forces may also be supported by Royal New Zealand Air Force aircraft, including P-8 Poseidons.
However, these forces are limited in size and in 2023 were described by the Government as "not in a fit state" to respond to regional challenges. New Zealand's subsequently announced "Defence Policy and Strategy Statement" noted that shaping the security environment, "focusing in particular on supporting security in and for the Pacific" would receive enhanced attention.
The Cook Islands Police Service is the police force of the Cook Islands. The Maritime Wing of the Police Service assists in exercising sovereignty over the nation's EEZ. Vessels have included the Pacific-class patrol boat CIPPB Te Kukupa, commissioned in May 1989 and refitted in 2015, which was later withdrawn from service and replaced by a larger and more capable Guardian-class patrol boat, CIPPB Te Kukupa II, which entered service in 2022.
Formerly, male homosexuality was de jure illegal in the Cook Islands and was punishable by a maximum term of seven years imprisonment; however, the law was never enforced. In 2023, legislation was passed which legalised homosexuality.
There are island councils on all of the inhabited outer islands (Outer Islands Local Government Act 1987 with amendments up to 2004, and Palmerston Island Local Government Act 1993) except Nassau, which is governed by Pukapuka (Suwarrow, with only one caretaker living on the island, also governed by Pukapuka, is not counted with the inhabited islands in this context). Each council is headed by a mayor.
The three Vaka councils of Rarotonga established in 1997 (Rarotonga Local Government Act 1997), also headed by mayors, were abolished in February 2008, despite much controversy.
On the lowest level, there are village committees. Nassau, which is governed by Pukapuka, has an island committee (Nassau Island Committee), which advises the Pukapuka Island Council on matters concerning its own island.
Births and deaths
In the Cook Islands, the Church is separate from the state, and most of the population is Christian. The religious distribution is as follows:
The various Protestant groups account for 62.8% of the believers, the most followed denomination being the Cook Islands Christian Church with 49.1%. Other Protestant Christian groups include Seventh-day Adventist 7.9%, Assemblies of God 3.7% and Apostolic Church 2.1%. Catholics are the main non-Protestant group, with 17% of the population. The Church of Jesus Christ of Latter-day Saints makes up 4.4%.
The economy is strongly affected by geography. It is isolated from foreign markets, and has some inadequate infrastructure; it lacks major natural resources, has limited manufacturing and suffers moderately from natural disasters. Tourism provides the economic base that makes up approximately 67.5% of GDP. Additionally, the economy is supported by foreign aid, largely from New Zealand. China has also contributed foreign aid, which has resulted in, among other projects, the Police Headquarters building. The Cook Islands is expanding its agriculture, mining and fishing sectors, with varying success.
Since approximately 1989, the Cook Islands have become a location specialising in so-called asset protection trusts, by which investors shelter assets from the reach of creditors and legal authorities. According to The New York Times, the Cooks have "laws devised to protect foreigners' assets from legal claims in their home countries", which were apparently crafted specifically to thwart the long arm of American justice; creditors must travel to the Cook Islands and argue their cases under Cooks law, often at prohibitive expense. Unlike other foreign jurisdictions such as the British Virgin Islands, the Cayman Islands and Switzerland, the Cooks "generally disregard foreign court orders" and do not require that bank accounts, real estate, or other assets protected from scrutiny (it is illegal to disclose names or any information about Cooks trusts) be physically located within the archipelago. Taxes on trusts and trust employees account for some 8% of the Cook Islands economy, behind tourism but ahead of fishing.
In recent years, the Cook Islands has gained a reputation as a debtor paradise, through the enactment of legislation that permits debtors to shield their property from the claims of creditors.
Since 2008 the Executive Director of Cook Islands Bank has been Vaine Nooana-Arioka.
There are eleven airports in the Cook Islands, including one with a paved runway, Rarotonga International Airport, served by four passenger airlines.
Newspapers in the Cook Islands are usually published in English with some articles in Cook Islands Māori. The Cook Islands News has been published since 1945, although it was owned by the government until 1989. Former newspapers include Te Akatauira, which was published from 1978 to 1980.
The languages of the Cook Islands include English, Cook Islands Māori (or "Rarotongan"), and Pukapukan. Dialects of Cook Islands Māori include Penrhyn; Rakahanga-Manihiki; the Ngaputoru dialect of Atiu, Mitiaro, and Mauke; the Aitutaki dialect; and the Mangaian dialect. Cook Islands Māori and its dialectic variants are closely related to both Tahitian and to New Zealand Māori. Pukapukan is considered closely related to the Samoan language. English and Cook Islands Māori are official languages of the Cook Islands; per the Te Reo Maori Act. The legal definition of Cook Islands Māori includes Pukapukan.
Music in the Cook Islands is varied, with Christian songs being quite popular, but traditional dancing and songs in Polynesian languages remain popular.
Woodcarving is a common art form in the Cook Islands. The proximity of islands in the southern group helped produce a homogeneous style of carving but that had special developments in each island. Rarotonga is known for its fisherman's gods and staff-gods, Atiu for its wooden seats, Mitiaro, Mauke and Atiu for mace and slab gods and Mangaia for its ceremonial adzes. Most of the original wood carvings were either spirited away by early European collectors or were burned in large numbers by missionaries. Today, carving is no longer the major art form with the same spiritual and cultural emphasis given to it by the Maori in New Zealand. However, there are continual efforts to interest young people in their heritage and some good work is being turned out under the guidance of older carvers. Atiu, in particular, has a strong tradition of crafts both in carving and local fibre arts such as tapa. Mangaia is the source of many fine adzes carved in a distinctive, idiosyncratic style with the so-called double-k design. Mangaia also produces food pounders carved from the heavy calcite found in its extensive limestone caves.
The outer islands produce traditional weaving of mats, basketware and hats. Particularly fine examples of rito hats are worn by women to church. They are made from the uncurled immature fibre of the coconut palm and are of very high quality. The Polynesian equivalent of Panama hats, they are highly valued and are keenly sought by Polynesian visitors from Tahiti. Often, they are decorated with hatbands made of minuscule pupu shells that are painted and stitched on by hand. Although pupu are found on other islands the collection and use of them in decorative work has become a speciality of Mangaia. The weaving of rito is a speciality of the northern islands, Manihiki, Rakahanga and Penrhyn.
A major art form in the Cook Islands is tivaevae. This is, in essence, the art of handmade island-scenery patchwork quilts. Introduced by the wives of missionaries in the 19th century, the craft grew into a communal activity, which is probably one of the main reasons for its popularity.
The Cook Islands has produced internationally recognised contemporary artists, especially in the main island of Rarotonga. Artists include painter (and photographer) Mahiriki Tangaroa, sculptors Eruera (Ted) Nia (originally a film maker) and master carver Mike Tavioni, painter (and Polynesian tattoo enthusiast) Upoko'ina Ian George, Aitutakian-born painter Tim Manavaroa Buchanan, Loretta Reynolds, Judith Kunzlé, Joan Gragg, Kay George (who is also known for her fabric designs), Apii Rongo, Varu Samuel, and multi-media, installation and community-project artist Ani O'Neill, all of whom currently live on the main island of Rarotonga. Atiuan-based Andrea Eimke is an artist who works in the medium of tapa and other textiles, and also co-authored the book 'Tivaivai – The Social Fabric of the Cook Islands' with British academic Susanne Kuechler. Many of these artists have studied at university art schools in New Zealand and continue to enjoy close links with the New Zealand art scene.
New Zealand-based Cook Islander artists include Michel Tuffery, print-maker David Teata, Richard Shortland Cooper, Nina Oberg Humphries, Sylvia Marsters and Jim Vivieaere.
Bergman Gallery (formerly BCA Gallery), situated on the main island of Rarotonga, is the main commercial dealer gallery in the Cook Islands and represents Cook Islands artists such as Sylvia Marsters, Mahiriki Tangaroa, Nina Oberg Humphries, Joan Gragg and Tungane Broadbent. The Art Studio Gallery in Arorangi, formerly run by Ian George and Kay George, is now Beluga Cafe. There is also Gallery Tavioni and Vananga, run by Mike Tavioni, and the Cook Islands National Museum also exhibits art.
Rugby league is the most popular sport in the Cook Islands.
21°14′S 159°46′W / 21.233°S 159.767°W | [
{
"paragraph_id": 0,
"text": "The Cook Islands is a self-governing island country in the South Pacific Ocean in free association with New Zealand. It comprises 15 islands whose total land area is 236.7 square kilometres (91 sq mi). The Cook Islands' Exclusive Economic Zone (EEZ) covers 1,960,027 square kilometres (756,771 sq mi) of ocean.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Cook Islands is in free association with New Zealand. Since 2001, the Cook Islands has directed its own foreign and defence policy, though it has no armed forces and therefore relies on New Zealand for its defence. In recent decades, the Cook Islands have adopted an increasingly assertive foreign policy, and a Cook Islander, Henry Puna, currently serves as Secretary General of the Pacific Islands Forum. Most Cook Islanders are also citizens of New Zealand, but they also have the status of Cook Islands nationals, which is not given to other New Zealand citizens. The Cook Islands have been an active member of the Pacific Community since 1980.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cook Islands' main population centres are on the island of Rarotonga (10,863 in 2021),. The Rarotonga International Airport, the main international gateway to the country, is located on this island.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The census of 2021 put the total population at 14,987. There is also a larger population of Cook Islanders in New Zealand and Australia: in the 2018 New Zealand census, 80,532 people said they were Cook Islanders, or of Cook Islands descent. The last Australian census recorded 28,000 Cook Islanders living in Australia, many with Australian citizenship.",
"title": ""
},
{
"paragraph_id": 4,
"text": "With over 168,000 visitors to the islands in 2018, tourism is the country's main industry, and the leading element of the economy, ahead of offshore banking, pearls, and marine and fruit exports.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The Cook Islands comprise 15 islands split between two island groups, which have been called individual names in indigenous languages including Cook Islands Māori and Pukapukan throughout the time they have been inhabited. The first name given by Europeans was Gente Hermosa (beautiful people) by Spanish explorers to Rakahanga in 1606.",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "The islands as a whole are named after British Captain James Cook, who visited during the 1770s and named Manuae \"Hervey Island\" after Augustus Hervey, 3rd Earl of Bristol. The southern island group became known as the \"Hervey Islands\" after this. In the 1820s, Russian Admiral Adam Johann von Krusenstern referred to the southern islands as the \"Cook Islands\" in his Atlas de l'Ocean Pacifique. The entire territory (including the northern island group) was not known as the \"Cook Islands\" until after its annexation by New Zealand in the early 20th century. In 1901, the New Zealand parliament passed the Cook and other Islands Government Act, demonstrating that the name \"Cook Islands\" only referred to some of the islands. However, this situation had changed by the passage of the Cook Islands Act 1915, which defined the Cooks' area and included all presently included islands.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "The islands' official name in Cook Islands Māori is Kūki 'Āirani, a transliteration of the English name.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "The Cook Islands were first settled around AD 1000 by Polynesian people who are thought to have migrated from Tahiti, an island 1,154 kilometres (717 mi) to the northeast of the main island of Rarotonga.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The first European contact with the islands took place in 1595 when the Spanish navigator Álvaro de Mendaña de Neira sighted the island of Pukapuka, which he named San Bernardo (Saint Bernard). Pedro Fernandes de Queirós, a Portuguese captain at the service of the Spanish Crown, made the first European landing in the islands when he set foot on Rakahanga in 1606, calling the island Gente Hermosa (Beautiful People).",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The British navigator Captain James Cook arrived in 1773 and again in 1777 giving the island of Manuae the name Hervey Island. The Hervey Islands later came to be applied to the entire southern group. The name \"Cook Islands\", in honour of Cook, first appeared on a Russian naval chart published by Adam Johann von Krusenstern in the 1820s.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1813 John Williams, a missionary on the colonial brig Endeavour (not the same ship as Cook's) made the first recorded European sighting of Rarotonga. The first recorded landing on Rarotonga by Europeans was in 1814 by the Cumberland; trouble broke out between the sailors and the Islanders and many were killed on both sides. The islands saw no more Europeans until English missionaries arrived in 1821. Christianity quickly took hold in the culture and many islanders are Christians today.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The islands were a popular stop in the 19th century for whaling ships from the United States, Britain and Australia. They visited, from at least 1826, to obtain water, food, and firewood. Their favourite islands were Rarotonga, Aitutaki, Mangaia and Penrhyn.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. On 6 September 1900, the islanders' leaders presented a petition asking that the islands (including Niue \"if possible\") should be annexed as British territory. On 8 and 9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people. A British Proclamation was issued, stating that the cessions were accepted and the islands declared parts of Her Britannic Majesty's dominions. However, it did not include Aitutaki. Even though the inhabitants regarded themselves as British subjects, the Crown's title was unclear until the island was formally annexed by that Proclamation. In 1901 the islands were included within the boundaries of the Colony of New Zealand by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The Cook Islands responded to the call for service when World War I began, immediately sending five contingents, close to 500 men, to the war. The island's young men volunteered at the outbreak of the war to reinforce the Māori Contingents and the Australian and New Zealand Mounted Rifles. A Patriotic Fund was set up very quickly, raising funds to support the war effort. The Cook Islanders were trained at Narrow Neck Camp in Devonport, and the first recruits departed on 13 October 1915 on the SS Te Anau. The ship arrived in Egypt just as the New Zealand units were about to be transferred to the Western Front. In September, 1916, the Pioneer Battalion, a combination of Cook Islanders, Māori and Pakeha soldiers, saw heavy action in the Allied attack on Flers, the first battle of the Somme. Three Cook Islanders from this first contingent died from enemy action and at least ten died of disease as they struggled to adapt to the conditions in Europe. The 2nd and 3rd Cook Island Contingents were part of the Sinai-Palestine campaign, first in a logistical role for the Australian and New Zealand Mounted Rifles at their Moascar base and later in ammunition supply for the Royal Artillery. After the war, the men returned to the outbreak of the influenza epidemic in New Zealand, and this, along with European diseases meant that a large number did not survive and died in New Zealand or on their return home over the coming years.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "When the British Nationality and New Zealand Citizenship Act 1948 came into effect on 1 January 1949, Cook Islanders who were British subjects automatically gained New Zealand citizenship. The islands remained a New Zealand dependent territory until the New Zealand Government decided to grant them self-governing status. On 4 August 1965, a constitution was promulgated. The first Monday in August is celebrated each year as Constitution Day. Albert Henry of the Cook Islands Party was elected as the first Premier and was knighted by Queen Elizabeth II. Henry led the nation until 1978, when he was accused of vote-rigging and resigned. He was stripped of his knighthood in 1979. He was succeeded by Tom Davis of the Democratic Party who held that position until March 1983.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "On 13 July 2017, the Cook Islands established Marae Moana, making it become the world's largest protected area by size.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In March 2019, it was reported that the Cook Islands had plans to change its name and remove the reference to Captain James Cook in favour of \"a title that reflects its 'Polynesian nature'\". It was later reported in May 2019 that the proposed name change had been poorly received by the Cook Islands diaspora. As a compromise, it was decided that the English name of the islands would not be altered, but that a new Cook Islands Māori name would be adopted to replace the current name, a transliteration from English. Discussions over the name continued in 2020.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "On September 25, 2023, the United States recognized Cook Islands sovereignty and established diplomatic relations.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Cook Islands are in the South Pacific Ocean, north-east of New Zealand, between American Samoa and French Polynesia. There are 15 major islands spread over 2,200,000 km (850,000 sq mi) of ocean, divided into two distinct groups: the Southern Cook Islands and the Northern Cook Islands of coral atolls.",
"title": "Geography"
},
{
"paragraph_id": 20,
"text": "The islands were formed by volcanic activity; the northern group is older and consists of six atolls, which are sunken volcanoes topped by coral growth. The climate is moderate to tropical. The Cook Islands consist of 15 islands and two reefs. From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were the cyclones Martin and Percy. Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests.",
"title": "Geography"
},
{
"paragraph_id": 21,
"text": "Note: The table is ordered from north to south. Population figures from the 2021 census.",
"title": "Geography"
},
{
"paragraph_id": 22,
"text": "The Cook Islands are a representative democracy with a parliamentary system in an associated state relationship with New Zealand. Executive power is exercised by the government, with the Prime Minister as head of government. Legislative power is vested in both the government and the Parliament of the Cook Islands. While the country is de jure unicameral, there are two legislative bodies with the House of Ariki acting as a de facto upper house.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 23,
"text": "There is a multi-party system. The Judiciary is independent of the executive and the legislature. The head of state is the King of New Zealand, who is represented in the Cook Islands by the King's Representative.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 24,
"text": "The islands are self-governing in \"free association\" with New Zealand. Under the Cook Islands constitution, New Zealand cannot pass laws for the Cook Islands. Rarotonga has its own foreign service and diplomatic network. Cook Islands nationals have the right to become citizens of New Zealand and can receive New Zealand government services when in New Zealand, but the reverse is not true; New Zealand citizens are not Cook Islands nationals. Despite this, as of 2018, the Cook Islands had diplomatic relations in its own name with 52 other countries. The Cook Islands is not a United Nations member state, but, along with Niue, has had their \"full treaty-making capacity\" recognised by the United Nations Secretariat, and is a full member of the World Health Organization (WHO), UNESCO, the International Civil Aviation Organization, the International Maritime Organization and the UN Food and Agriculture Organization, all UN specialized agencies, and is an associate member of the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP) and a Member of the Assembly of States of the International Criminal Court.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 25,
"text": "The Cook Islands Ambassador to the International Maritime Organisation, Captain Ian Finley, has faced controversy for accepting $700,000 of undisclosed funding from a shipping industry lobby group run with his wife, at the same time as he was helping design environmental rules for the shipping industry.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 26,
"text": "Despite being one of the most vulnerable countries to climate change, the Cook Islands paradoxically opposed any measures to reduce greenhouse gas emissions from the shipping industry, according to Climate Home.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 27,
"text": "On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing any American claims to Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990 the Cook Islands and France signed a treaty that delimited the boundary between the Cook Islands and French Polynesia. In late August 2012, United States Secretary of State Hillary Clinton visited the islands. In 2017, the Cook Islands signed the UN Treaty on the Prohibition of Nuclear Weapons. On 25 September 2023, the Cook Islands and the United States of America established diplomatic relations under the leadership of Prime Minister Mark Brown at a ceremony in Washington, DC.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 28,
"text": "Defence is the responsibility of New Zealand, in consultation with the Cook Islands and at its request. The New Zealand Defence Force has responsibilities for protecting the territory as well as its offshore Exclusive Economic Zone (EEZ). The total offshore EEZ is about 2 million square kilometers. Vessels of the Royal New Zealand Navy can be employed for this task including its Protector-class offshore patrol vessels. These naval forces may also be supported by Royal New Zealand Air Force aircraft, including P-8 Poseidons.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 29,
"text": "However, these forces are limited in size and in 2023 were described by the Government as \"not in a fit state\" to respond to regional challenges. New Zealand's subsequently announced \"Defence Policy and Strategy Statement\" noted that shaping the security environment, \"focusing in particular on supporting security in and for the Pacific\" would receive enhanced attention.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 30,
"text": "The Cook Islands Police Service is the police force of the Cook Islands. The Maritime Wing of the Police Service assists in exercising sovereignty over the nation's EEZ. Vessels have included a Pacific-class patrol boat, CIPPB Te Kukupa commissioned in May 1989 which received a re-fit in 2015 but was withdrawn from service and replaced by a larger and more capable Guardian-class patrol boat, CIPPB Te Kukupa II, which entered service in 2022.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 31,
"text": "Formerly, male homosexuality was de jure illegal in the Cook Islands and was punishable by a maximum term of seven years imprisonment; however, the law was never enforced. In 2023, legislation was passed which legalised homosexuality.",
"title": "Politics and foreign relations"
},
{
"paragraph_id": 32,
"text": "There are island councils on all of the inhabited outer islands (Outer Islands Local Government Act 1987 with amendments up to 2004, and Palmerston Island Local Government Act 1993) except Nassau, which is governed by Pukapuka (Suwarrow, with only one caretaker living on the island, also governed by Pukapuka, is not counted with the inhabited islands in this context). Each council is headed by a mayor.",
"title": "Administrative subdivisions"
},
{
"paragraph_id": 33,
"text": "The three Vaka councils of Rarotonga established in 1997 (Rarotonga Local Government Act 1997), also headed by mayors, were abolished in February 2008, despite much controversy.",
"title": "Administrative subdivisions"
},
{
"paragraph_id": 34,
"text": "On the lowest level, there are village committees. Nassau, which is governed by Pukapuka, has an island committee (Nassau Island Committee), which advises the Pukapuka Island Council on matters concerning its own island.",
"title": "Administrative subdivisions"
},
{
"paragraph_id": 35,
"text": "Births and deaths",
"title": "Demographics"
},
{
"paragraph_id": 36,
"text": "In the Cook Islands, the Church is separate from the state, and most of the population is Christian. The religious distribution is as follows:",
"title": "Demographics"
},
{
"paragraph_id": 37,
"text": "The various Protestant groups account for 62.8% of the believers, the most followed denomination being the Cook Islands Christian Church with 49.1%. Other Protestant Christian groups include Seventh-day Adventist 7.9%, Assemblies of God 3.7% and Apostolic Church 2.1%. The main non-Protestant group are Catholics with 17% of the population. The Church of Jesus Christ of Latter-day Saints makes up 4.4%.",
"title": "Demographics"
},
{
"paragraph_id": 38,
"text": "The economy is strongly affected by geography. It is isolated from foreign markets, and has some inadequate infrastructure; it lacks major natural resources, has limited manufacturing and suffers moderately from natural disasters. Tourism provides the economic base that makes up approximately 67.5% of GDP. Additionally, the economy is supported by foreign aid, largely from New Zealand. China has also contributed foreign aid, which has resulted in, among other projects, the Police Headquarters building. The Cook Islands is expanding its agriculture, mining and fishing sectors, with varying success.",
"title": "Economy"
},
{
"paragraph_id": 39,
"text": "Since approximately 1989, the Cook Islands have become a location specialising in so-called asset protection trusts, by which investors shelter assets from the reach of creditors and legal authorities. According to The New York Times, the Cooks have \"laws devised to protect foreigners' assets from legal claims in their home countries\", which were apparently crafted specifically to thwart the long arm of American justice; creditors must travel to the Cook Islands and argue their cases under Cooks law, often at prohibitive expense. Unlike other foreign jurisdictions such as the British Virgin Islands, the Cayman Islands and Switzerland, the Cooks \"generally disregard foreign court orders\" and do not require that bank accounts, real estate, or other assets protected from scrutiny (it is illegal to disclose names or any information about Cooks trusts) be physically located within the archipelago. Taxes on trusts and trust employees account for some 8% of the Cook Islands economy, behind tourism but ahead of fishing.",
"title": "Economy"
},
{
"paragraph_id": 40,
"text": "In recent years, the Cook Islands has gained a reputation as a debtor paradise, through the enactment of legislation that permits debtors to shield their property from the claims of creditors.",
"title": "Economy"
},
{
"paragraph_id": 41,
"text": "Since 2008 the Executive Director of Cook Islands Bank has been Vaine Nooana-Arioka.",
"title": "Economy"
},
{
"paragraph_id": 42,
"text": "There are eleven airports in the Cook Islands, including one with a paved runway, Rarotonga International Airport, served by four passenger airlines.",
"title": "Economy"
},
{
"paragraph_id": 43,
"text": "Newspapers in the Cook Islands are usually published in English with some articles in Cook Islands Māori. The Cook Islands News has been published since 1945, although it was owned by the government until 1989. Former newspapers include Te Akatauira, which was published from 1978 to 1980.",
"title": "Culture"
},
{
"paragraph_id": 44,
"text": "The languages of the Cook Islands include English, Cook Islands Māori (or \"Rarotongan\"), and Pukapukan. Dialects of Cook Islands Māori include Penrhyn; Rakahanga-Manihiki; the Ngaputoru dialect of Atiu, Mitiaro, and Mauke; the Aitutaki dialect; and the Mangaian dialect. Cook Islands Māori and its dialectic variants are closely related to both Tahitian and to New Zealand Māori. Pukapukan is considered closely related to the Samoan language. English and Cook Islands Māori are official languages of the Cook Islands; per the Te Reo Maori Act. The legal definition of Cook Islands Māori includes Pukapukan.",
"title": "Culture"
},
{
"paragraph_id": 45,
"text": "Music in the Cook Islands is varied, with Christian songs being quite popular, but traditional dancing and songs in Polynesian languages remain popular.",
"title": "Culture"
},
{
"paragraph_id": 46,
"text": "Woodcarving is a common art form in the Cook Islands. The proximity of islands in the southern group helped produce a homogeneous style of carving but that had special developments in each island. Rarotonga is known for its fisherman's gods and staff-gods, Atiu for its wooden seats, Mitiaro, Mauke and Atiu for mace and slab gods and Mangaia for its ceremonial adzes. Most of the original wood carvings were either spirited away by early European collectors or were burned in large numbers by missionaries. Today, carving is no longer the major art form with the same spiritual and cultural emphasis given to it by the Maori in New Zealand. However, there are continual efforts to interest young people in their heritage and some good work is being turned out under the guidance of older carvers. Atiu, in particular, has a strong tradition of crafts both in carving and local fibre arts such as tapa. Mangaia is the source of many fine adzes carved in a distinctive, idiosyncratic style with the so-called double-k design. Mangaia also produces food pounders carved from the heavy calcite found in its extensive limestone caves.",
"title": "Culture"
},
{
"paragraph_id": 47,
"text": "The outer islands produce traditional weaving of mats, basketware and hats. Particularly fine examples of rito hats are worn by women to church. They are made from the uncurled immature fibre of the coconut palm and are of very high quality. The Polynesian equivalent of Panama hats, they are highly valued and are keenly sought by Polynesian visitors from Tahiti. Often, they are decorated with hatbands made of minuscule pupu shells that are painted and stitched on by hand. Although pupu are found on other islands the collection and use of them in decorative work has become a speciality of Mangaia. The weaving of rito is a speciality of the northern islands, Manihiki, Rakahanga and Penrhyn.",
"title": "Culture"
},
{
"paragraph_id": 48,
"text": "A major art form in the Cook Islands is tivaevae. This is, in essence, the art of handmade Island scenery patchwork quilts. Introduced by the wives of missionaries in the 19th century, the craft grew into a communal activity, which is probably one of the main reasons for its popularity.",
"title": "Culture"
},
{
"paragraph_id": 49,
"text": "The Cook Islands has produced internationally recognised contemporary artists, especially in the main island of Rarotonga. Artists include painter (and photographer) Mahiriki Tangaroa, sculptors Eruera (Ted) Nia (originally a film maker) and master carver Mike Tavioni, painter (and Polynesian tattoo enthusiast) Upoko'ina Ian George, Aitutakian-born painter Tim Manavaroa Buchanan, Loretta Reynolds, Judith Kunzlé, Joan Gragg, Kay George (who is also known for her fabric designs), Apii Rongo, Varu Samuel, and multi-media, installation and community-project artist Ani O'Neill, all of whom currently live on the main island of Rarotonga. Atiuan-based Andrea Eimke is an artist who works in the medium of tapa and other textiles, and also co-authored the book 'Tivaivai – The Social Fabric of the Cook Islands' with British academic Susanne Kuechler. Many of these artists have studied at university art schools in New Zealand and continue to enjoy close links with the New Zealand art scene.",
"title": "Culture"
},
{
"paragraph_id": 50,
"text": "New Zealand-based Cook Islander artists include Michel Tuffery, print-maker David Teata, Richard Shortland Cooper, Nina Oberg Humphries, Sylvia Marsters and Jim Vivieaere.",
"title": "Culture"
},
{
"paragraph_id": 51,
"text": "Bergman Gallery (formerly BCA Gallery) is the main commercial dealer gallery in the Cook Islands, situated in the main island of Rarotonga, and represents Cook Islands artists such as Sylvia Marsters, Mahiriki Tangaroa, Nina Oberg Humphries, Joan Gragg and Tungane Broadbent The Art Studio Gallery in Arorangi, was run by Ian George and Kay George is now Beluga Cafe. There is also Gallery Tavioni and Vananga run by Mike Tavioni and The Cook Islands National Museum also exhibits art.",
"title": "Culture"
},
{
"paragraph_id": 52,
"text": "Rugby league is the most popular sport in the Cook Islands.",
"title": "Sport"
},
{
"paragraph_id": 53,
"text": "21°14′S 159°46′W / 21.233°S 159.767°W / -21.233; -159.767",
"title": "External links"
}
] | The Cook Islands is a self-governing island country in the South Pacific Ocean in free association with New Zealand. It comprises 15 islands whose total land area is 236.7 square kilometres (91 sq mi). The Cook Islands' Exclusive Economic Zone (EEZ) covers 1,960,027 square kilometres (756,771 sq mi) of ocean. Since 2001, the Cook Islands has directed its own foreign and defence policy, though it has no armed forces and therefore relies on New Zealand for its defence. In recent decades, the Cook Islands have adopted an increasingly assertive foreign policy, and a Cook Islander, Henry Puna, currently serves as Secretary General of the Pacific Islands Forum. Most Cook Islanders are also citizens of New Zealand, but they also have the status of Cook Islands nationals, which is not given to other New Zealand citizens. The Cook Islands have been an active member of the Pacific Community since 1980. The Cook Islands' main population centres are on the island of Rarotonga. The Rarotonga International Airport, the main international gateway to the country, is located on this island. The census of 2021 put the total population at 14,987. There is also a larger population of Cook Islanders in New Zealand and Australia: in the 2018 New Zealand census, 80,532 people said they were Cook Islanders, or of Cook Islands descent. The last Australian census recorded 28,000 Cook Islanders living in Australia, many with Australian citizenship. With over 168,000 visitors to the islands in 2018, tourism is the country's main industry, and the leading element of the economy, ahead of offshore banking, pearls, and marine and fruit exports. | 2001-10-31T22:55:43Z | 2023-12-30T17:00:17Z | [
"Template:Convert",
"Template:Main",
"Template:Portal",
"Template:Cite book",
"Template:Subject bar",
"Template:Countries and territories of Oceania",
"Template:Short description",
"Template:As of",
"Template:Cite journal",
"Template:Curlie",
"Template:Pacific Islands Forum (PIF)",
"Template:Use dmy dates",
"Template:Ship",
"Template:Cite CIA World Factbook",
"Template:Efn",
"Template:Cn",
"Template:Cite press release",
"Template:Notelist",
"Template:Cook Islands topics",
"Template:Coord",
"Template:Cite news",
"Template:Webarchive",
"Template:Further",
"Template:Monarch of New Zealand, current",
"Template:See also",
"Template:Cite web",
"Template:Cbignore",
"Template:Navboxes",
"Template:Update",
"Template:Population pyramid",
"Template:Citation",
"Template:Official website",
"Template:Culture of Oceania",
"Template:Hatgrp",
"Template:EngvarB",
"Template:Infobox country",
"Template:Sclass2",
"Template:Reflist",
"Template:ISBN",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cook_Islands |
7,068 | History of the Cook Islands | The Cook Islands are named after Captain James Cook, who visited the islands in 1773 and 1777, although Spanish navigator Alvaro de Mendaña was the first European to reach the islands in 1595. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti.
By 1900, the islands were annexed as British territory. In 1901, the islands were included within the boundaries of the Colony of New Zealand.
The Cook Islands contain 15 islands in the group spread over a vast area in the South Pacific. The majority of islands are low coral atolls in the Northern Group, with Rarotonga, a volcanic island in the Southern Group, as the main administration and government centre. The main Cook Islands language is Rarotongan Māori. There are some variations in dialect in the 'outer' islands.
It is thought that the Cook Islands may have been settled between 900 and 1200 CE. Early settlements suggest that the settlers migrated from Tahiti, to the northeast of the Cooks. The Cook Islands continue to hold important connections with Tahiti, and this is reflected in the two countries' culture, tradition and language. It is also thought that the early settlers were true Tahitians, who landed in Rarotonga (Takitumu district). There are notable historic epics of great warriors who travelled between the two nations for a wide variety of reasons. The purpose of these missions is still unclear, but recent research indicates that groups large and small often fled their islands because of local wars forced upon them. For each group to travel and to survive, they would normally rely on a warrior to lead them. Outstanding warriors are still mentioned in the countries' traditions and stories.
These arrivals are evidenced by an older road in Toi, the Ara Metua, which runs around most of Rarotonga and is believed to be at least 1200 years old. This 29 km long paved road is a considerable achievement of ancient engineering, possibly unsurpassed elsewhere in Polynesia. The islands of Manihiki and Rakahanga trace their origins to the arrival of Toa Nui, a warrior from the Puaikura tribe of Rarotonga, and Tepaeru, a high-ranking woman from the Takitumu or Te-Au-O-Tonga tribes of Rarotonga. Tongareva was settled by an ancestor from Rakahanga called Mahuta and an Aitutaki ariki and chief, Taruia, and possibly a group from Tahiti. The remainder of the northern islands, Pukapuka (Te Ulu O Te Watu), was probably settled by expeditions from Samoa.
Spanish ships visited the islands in the 16th century; the first written record of contact between Europeans and the native inhabitants of the Cook Islands came with the sighting of Pukapuka by Spanish sailor Álvaro de Mendaña in 1595, who called it San Bernardo (Saint Bernard). Portuguese-Spaniard Pedro Fernández de Quirós made the first recorded European landing in the islands when he set foot on Rakahanga in 1606, calling it Gente Hermosa (Beautiful People).
British navigator Captain James Cook arrived in 1773 and 1777. Cook named the islands the 'Hervey Islands' to honour a British Lord of the Admiralty. Half a century later, the Russian Baltic German Admiral Adam Johann von Krusenstern published the Atlas de l'Ocean Pacifique, in which he renamed the islands the Cook Islands to honour Cook. Captain Cook navigated and mapped much of the group. Surprisingly, Cook never sighted the largest island, Rarotonga, and the only island that he personally set foot on was the tiny, uninhabited Palmerston Atoll.
The first recorded European landing on Rarotonga was in 1814 by the Cumberland; trouble broke out between the sailors and the Islanders, and many were killed on both sides.
The islands saw no more Europeans until missionaries arrived from England in 1821. Christianity quickly took hold in the culture and remains the predominant religion today.
In 1823, Captain John Dibbs of the colonial barque Endeavour made the first official sighting of the island Rarotonga. The Endeavour was transporting Rev. John Williams on a missionary voyage to the islands.
Brutal Peruvian slave traders, known as blackbirders, took a terrible toll on the islands of the Northern Group in 1862 and 1863. At first, the traders may have genuinely operated as labour recruiters, but they quickly turned to subterfuge and outright kidnapping to round up their human cargo. The Cook Islands was not the only island group visited by the traders, but Penrhyn Atoll was their first port of call and it has been estimated that three-quarters of the population was taken to Callao, Peru. Rakahanga and Pukapuka also suffered tremendous losses.
The Cook Islands became a British protectorate in 1888, due largely to community fears that France might occupy the territory as it had Tahiti. On 6 September 1900, the leading islanders presented a petition asking that the islands (including Niue "if possible") should be annexed as British territory. On 8–9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people, and a British proclamation issued at the same time accepted the cessions, the islands being declared parts of Her Britannic Majesty's dominions. These instruments did not include Aitutaki. It appears that, though the inhabitants regarded themselves as British subjects, the Crown's title was uncertain, and the island was formally annexed by Proclamation dated 9 October 1900. The islands were included within the boundaries of the Colony of New Zealand in 1901 by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time.
In 1962 New Zealand asked the Cook Islands legislature to vote on four options for the future: independence, self-government, integration into New Zealand, or integration into a larger Polynesian federation. The legislature decided upon self-government. Following elections in 1965, the Cook Islands transitioned to become a self-governing territory in free association with New Zealand. This arrangement left the Cook Islands politically independent, but officially remaining under New Zealand sovereignty. This political transition was approved by the United Nations. Despite this status change, the islands remained financially dependent on New Zealand, and New Zealand believed that a failure of the free association agreement would lead to integration rather than full independence.
New Zealand is tasked with overseeing the country's foreign relations and defence. The Cook Islands, Niue, and New Zealand (with its territories: Tokelau and the Ross Dependency) make up the Realm of New Zealand.
After achieving autonomy in 1965, the Cook Islands elected Albert Henry of the Cook Islands Party as their first Prime Minister. He led the country until 1978 when he was accused of vote-rigging. He was succeeded by Tom Davis of the Democratic Party.
On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing the US claim to the islands of Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990, the Cook Islands signed a treaty with France which delimited the maritime boundary between the Cook Islands and French Polynesia.
On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. "Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers," chief Makea Vakatini Joseph Ariki explained. The Cook Islands Herald suggested that the ariki were attempting thereby to regain some of their traditional prestige or mana. Prime Minister Jim Marurai described the attempted takeover as "ill-founded and nonsensical". By June 23, the situation appeared to have normalised, with members of the House of Ariki agreeing to return to their regular duties.
900 - First people arrive in the islands.
1595 — Spaniard Álvaro de Mendaña de Neira is the first European to sight the islands.
1606 — Portuguese-Spaniard Pedro Fernández de Quirós makes the first recorded European landing in the islands when he sets foot on Rakahanga.
1773 — Captain James Cook explores the islands and names them the Hervey Islands. Fifty years later they are renamed in his honour by Russian Admiral Adam Johann von Krusenstern.
1821 — English and Tahitian missionaries land in Aitutaki, become the first non-Polynesian settlers.
1823 — English missionary John Williams lands in Rarotonga, converting Makea Pori Ariki to Christianity.
1858 — The Cook Islands become united as a state, the Kingdom of Rarotonga.
1862 — Peruvian slave traders take a terrible toll on the islands of Penrhyn, Rakahanga and Pukapuka in 1862 and 1863.
1888 — Cook Islands are proclaimed a British protectorate and a single federal parliament is established.
1900 — The Cook Islands are ceded to the United Kingdom as British territory, except for Aitutaki, which is instead annexed by the United Kingdom at the same time.
1901 — The boundaries of the Colony of New Zealand are extended by the United Kingdom to include the Cook Islands.
1924 — The All Black Invincibles stop in Rarotonga on their way to the United Kingdom and play a friendly match against a scratch Rarotongan team.
1946 — Legislative Council is established. For the first time since 1912, the territory has direct representation.
1957 — Legislative Council is reorganized as the Legislative Assembly.
1965 — The Cook Islands become a self-governing territory in free association with New Zealand. Albert Henry, leader of the Cook Islands Party, is elected as the territory's first prime minister.
1974 — Albert Henry is knighted by Queen Elizabeth II
1979 — Sir Albert Henry is found guilty of electoral fraud and stripped of his premiership and his knighthood. Tom Davis becomes Premier.
1980 — Cook Islands – United States Maritime Boundary Treaty establishes the Cook Islands – American Samoa boundary
1981 — Constitution is amended. Legislative Assembly is renamed Parliament, which grows from 22 to 24 seats, and the parliamentary term is extended from four to five years. Tom Davis is knighted.
1984 — The country's first coalition government agreement, between Sir Thomas Davis and Geoffrey Henry, is signed in the lead-up to hosting the regional Mini Games in 1985. Shifting coalitions saw ten years of political instability. At one stage, all but two MPs were in government.
1985 — Rarotonga Treaty is opened for signing in the Cook Islands, creating a nuclear-free zone in the South Pacific.
1986 — In January 1986, following the rift between New Zealand and the US in respect of the ANZUS security arrangements, Prime Minister Tom Davis declared the Cook Islands a neutral country, because he considered that New Zealand (which has control over the islands' defence and foreign policy) was no longer in a position to defend the islands. The proclamation of neutrality meant that the Cook Islands would not enter into a military relationship with any foreign power, and, in particular, would prohibit visits by US warships. Visits by US naval vessels were later allowed to resume under Geoffrey Henry's Government.
1990 — Cook Islands – France Maritime Delimitation Agreement establishes the Cook Islands–French Polynesia boundary
1991 — The Cook Islands signed a treaty of friendship and co-operation with France, covering economic development, trade and surveillance of the islands' EEZ. The establishment of closer relations with France was widely regarded as an expression of the Cook Islands' Government's dissatisfaction with existing arrangements with New Zealand which was no longer in a position to defend the Cook Islands.
1995 — The French Government resumed its programme of nuclear-weapons testing at Mururoa Atoll in September 1995 upsetting the Cook Islands. New Prime Minister Geoffrey Henry was fiercely critical of the decision and dispatched a vaka (traditional voyaging canoe) with a crew of Cook Islands' traditional warriors to protest near the test site. The tests were concluded in January 1996 and a moratorium was placed on future testing by the French government.
1997 — Full diplomatic relations established with the People's Republic of China.
1997 — In November, Cyclone Martin strikes Manihiki, killing at least six people; 80% of buildings are damaged and the black pearl industry suffers severe losses.
1999 — A second era of political instability begins, starting with five different coalitions in less than nine months, and at least as many since then.
2000 — Full diplomatic relations concluded with France.
2002 — Prime Minister Terepai Maoate is ousted from government following second vote of no-confidence in his leadership.
2004 — Prime Minister Robert Woonton visits China; Chinese Premier Wen Jiabao grants $16 million in development aid.
2006 — Parliamentary elections are held. The Democratic Party keeps a majority of seats in parliament, but is unable to command a majority for confidence, forcing a coalition with breakaway MPs who left, then rejoined the "Demos".
2008 — Pacific Island nations imposed a series of measures aimed at halting overfishing. | [
{
"paragraph_id": 0,
"text": "The Cook Islands are named after Captain James Cook, who visited the islands in 1773 and 1777, although Spanish navigator Alvaro de Mendaña was the first European to reach the islands in 1595. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti.",
"title": ""
},
{
"paragraph_id": 1,
"text": "By 1900, the islands were annexed as British territory. In 1901, the islands were included within the boundaries of the Colony of New Zealand.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cook Islands contain 15 islands in the group spread over a vast area in the South Pacific. The majority of islands are low coral atolls in the Northern Group, with Rarotonga, a volcanic island in the Southern Group, as the main administration and government centre. The main Cook Islands language is Rarotongan Māori. There are some variations in dialect in the 'outer' islands.",
"title": ""
},
{
"paragraph_id": 3,
"text": "It is thought that the Cook Islands may have been settled between the years 900-1200 CE. Early settlements suggest that the settlers migrated from Tahiti, to the northeast of the Cooks. The Cook Islands continue to hold important connections with Tahiti, and this is generally found in the two countries' culture, tradition and language. It is also thought that the early settlers were true Tahitians, who landed in Rarotonga (Takitumu district). There are notable historic epics of great warriors who travel between the two nations for a wide variety of reasons. The purpose of these missions is still unclear but recent research indicates that large to small groups often fled their island due to local wars being forced upon them. For each group to travel and to survive, they would normally rely on a warrior to lead them. Outstanding warriors are still mentioned in the countries' traditions and stories.",
"title": "Early settlers of the Cooks"
},
{
"paragraph_id": 4,
"text": "These arrivals are evidenced by an older road in Toi, the Ara Metua, which runs around most of Rarotonga, and is believed to be at least 1200 years old. This 29 km long, paved road is a considerable achievement of ancient engineering, possibly unsurpassed elsewhere in Polynesia. The islands of Manihiki and Rakahanga trace their origins to the arrival of Toa Nui, a warrior from the Puaikura tribe of Rarotonga, and Tepaeru, a high-ranking woman from the Takitumu or Te-Au-O-Tonga tribes of Rarotonga. Tongareva was settled by an ancestor from Rakahanga called Mahuta and an Aitutaki Ariki & Chief Taruia, and possibly a group from Tahiti. The remainder of the northern islands, Pukapuka (Te Ulu O Te Watu) was probably settled by expeditions from Samoa.",
"title": "Early settlers of the Cooks"
},
{
"paragraph_id": 5,
"text": "Spanish ships visited the islands in the 16th century; the first written record of contact between Europeans and the native inhabitants of the Cook Islands came with the sighting of Pukapuka by Spanish sailor Álvaro de Mendaña in 1595, who called it San Bernardo (Saint Bernard). Portuguese-Spaniard Pedro Fernández de Quirós made the first recorded European landing in the islands when he set foot on Rakahanga in 1606, calling it Gente Hermosa (Beautiful People).",
"title": "Early European contact"
},
{
"paragraph_id": 6,
"text": "British navigator Captain James Cook arrived in 1773 and 1777. Cook named the islands the 'Hervey Islands' to honour a British Lord of the Admiralty. Half a century later, the Russian Baltic German Admiral Adam Johann von Krusenstern published the Atlas de l'Ocean Pacifique, in which he renamed the islands the Cook Islands to honour Cook. Captain Cook navigated and mapped much of the group. Surprisingly, Cook never sighted the largest island, Rarotonga, and the only island that he personally set foot on was the tiny, uninhabited Palmerston Atoll.",
"title": "Early European contact"
},
{
"paragraph_id": 7,
"text": "The first recorded landing by Europeans was in 1814 by the Cumberland; trouble broke out between the sailors and the Islanders and many were killed on both sides.",
"title": "Early European contact"
},
{
"paragraph_id": 8,
"text": "The islands saw no more Europeans until missionaries arrived from England in 1821. Christianity quickly took hold in the culture and remains the predominant religion today.",
"title": "Early European contact"
},
{
"paragraph_id": 9,
"text": "In 1823, Captain John Dibbs of the colonial barque Endeavour made the first official sighting of the island Rarotonga. The Endeavour was transporting Rev. John Williams on a missionary voyage to the islands.",
"title": "Early European contact"
},
{
"paragraph_id": 10,
"text": "Brutal Peruvian slave traders, known as blackbirders, took a terrible toll on the islands of the Northern Group in 1862 and 1863. At first, the traders may have genuinely operated as labour recruiters, but they quickly turned to subterfuge and outright kidnapping to round up their human cargo. The Cook Islands was not the only island group visited by the traders, but Penrhyn Atoll was their first port of call and it has been estimated that three-quarters of the population was taken to Callao, Peru. Rakahanga and Pukapuka also suffered tremendous losses.",
"title": "Early European contact"
},
{
"paragraph_id": 11,
"text": "The Cook Islands became a British protectorate in 1888, due largely to community fears that France might occupy the territory as it had Tahiti. On 6 September 1900, the leading islanders presented a petition asking that the islands (including Niue \"if possible\") should be annexed as British territory. On 8–9 October 1900, seven instruments of cession of Rarotonga and other islands were signed by their chiefs and people, and a British proclamation issued at the same time accepted the cessions, the islands being declared parts of Her Britannic Majesty's dominions. These instruments did not include Aitutaki. It appears that, though the inhabitants regarded themselves as British subjects, the Crown's title was uncertain, and the island was formally annexed by Proclamation dated 9 October 1900. The islands were included within the boundaries of the Colony of New Zealand in 1901 by Order in Council under the Colonial Boundaries Act, 1895 of the United Kingdom. The boundary change became effective on 11 June 1901, and the Cook Islands have had a formal relationship with New Zealand since that time.",
"title": "British protectorate"
},
{
"paragraph_id": 12,
"text": "In 1962 New Zealand asked the Cook Islands legislature to vote on four options for the future: independence, self-government, integration into New Zealand, or integration into a larger Polynesian federation. The legislature decided upon self-government. Following elections in 1965, the Cook Islands transitioned to become a self-governing territory in free association with New Zealand. This arrangement left the Cook Islands politically independent, but officially remaining under New Zealand sovereignty. This political transition was approved by the United Nations. Despite this status change, the islands remained financially dependent on New Zealand, and New Zealand believed that a failure of the free association agreement would lead to integration rather than full independence.",
"title": "Recent history"
},
{
"paragraph_id": 13,
"text": "New Zealand is tasked with overseeing the country's foreign relations and defense. The Cook Islands, Niue, and New Zealand (with its territories: Tokelau and the Ross Dependency) make up the Realm of New Zealand.",
"title": "Recent history"
},
{
"paragraph_id": 14,
"text": "After achieving autonomy in 1965, the Cook Islands elected Albert Henry of the Cook Islands Party as their first Prime Minister. He led the country until 1978 when he was accused of vote-rigging. He was succeeded by Tom Davis of the Democratic Party.",
"title": "Recent history"
},
{
"paragraph_id": 15,
"text": "On 11 June 1980, the United States signed a treaty with the Cook Islands specifying the maritime border between the Cook Islands and American Samoa and also relinquishing the US claim to the islands of Penrhyn, Pukapuka, Manihiki, and Rakahanga. In 1990, the Cook Islands signed a treaty with France which delimited the maritime boundary between the Cook Islands and French Polynesia.",
"title": "Recent history"
},
{
"paragraph_id": 16,
"text": "On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. \"Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers,\" chief Makea Vakatini Joseph Ariki explained. The Cook Islands Herald suggested that the ariki were attempting thereby to regain some of their traditional prestige or mana. Prime Minister Jim Marurai described the take-over move as \"ill-founded and nonsensical\". By June 23, the situation appeared to have normalised, with members of the House of Ariki accepting to return to their regular duties.",
"title": "Recent history"
},
{
"paragraph_id": 17,
"text": "900 - first People arrive to the islands",
"title": "Timeline"
},
{
"paragraph_id": 18,
"text": "1595 — Spaniard Álvaro de Mendaña de Neira is the first European to sight the islands.",
"title": "Timeline"
},
{
"paragraph_id": 19,
"text": "1606 — Portuguese-Spaniard Pedro Fernández de Quirós makes the first recorded European landing in the islands when he sets foot on Rakahanga.",
"title": "Timeline"
},
{
"paragraph_id": 20,
"text": "1773 — Captain James Cook explores the islands and names them the Hervey Islands. Fifty years later they are renamed in his honour by Russian Admiral Adam Johann von Krusenstern.",
"title": "Timeline"
},
{
"paragraph_id": 21,
"text": "1821 — English and Tahitian missionaries land in Aitutaki, become the first non-Polynesian settlers.",
"title": "Timeline"
},
{
"paragraph_id": 22,
"text": "1823 — English missionary John Williams lands in Rarotonga, converting Makea Pori Ariki to Christianity.",
"title": "Timeline"
},
{
"paragraph_id": 23,
"text": "1858 — The Cook Islands become united as a state, the Kingdom of Rarotonga.",
"title": "Timeline"
},
{
"paragraph_id": 24,
"text": "1862 — Peruvian slave traders take a terrible toll on the islands of Penrhyn, Rakahanga and Pukapuka in 1862 and 1863.",
"title": "Timeline"
},
{
"paragraph_id": 25,
"text": "1888 — Cook Islands are proclaimed a British protectorate and a single federal parliament is established.",
"title": "Timeline"
},
{
"paragraph_id": 26,
"text": "1900 — The Cook Islands are ceded to the United Kingdom as British territory, except for Aitutaki which was annexed by the United Kingdom at the same time.",
"title": "Timeline"
},
{
"paragraph_id": 27,
"text": "1901 — The boundaries of the Colony of New Zealand are extended by the United Kingdom to include the Cook Islands.",
"title": "Timeline"
},
{
"paragraph_id": 28,
"text": "1924 — The All Black Invincibles stop in Rarotonga on their way to the United Kingdom and play a friendly match against a scratch Rarotongan team.",
"title": "Timeline"
},
{
"paragraph_id": 29,
"text": "1946 — Legislative Council is established. For the first time since 1912, the territory has direct representation.",
"title": "Timeline"
},
{
"paragraph_id": 30,
"text": "1957 — Legislative Council is reorganized as the Legislative Assembly.",
"title": "Timeline"
},
{
"paragraph_id": 31,
"text": "1965 — The Cook Islands become a self-governing territory in free association with New Zealand. Albert Henry, leader of the Cook Islands Party, is elected as the territory's first prime minister.",
"title": "Timeline"
},
{
"paragraph_id": 32,
"text": "1974 — Albert Henry is knighted by Queen Elizabeth II",
"title": "Timeline"
},
{
"paragraph_id": 33,
"text": "1979 — Sir Albert Henry is found guilty of electoral fraud and stripped of his premiership and his knighthood. Tom Davis becomes Premier.",
"title": "Timeline"
},
{
"paragraph_id": 34,
"text": "1980 — Cook Islands – United States Maritime Boundary Treaty establishes the Cook Islands – American Samoa boundary",
"title": "Timeline"
},
{
"paragraph_id": 35,
"text": "1981 — Constitution is amended. Legislative Assembly is renamed Parliament, which grows from 22 to 24 seats, and the parliamentary term is extended from four to five years. Tom Davis is knighted.",
"title": "Timeline"
},
{
"paragraph_id": 36,
"text": "1984 — The country's first coalition government, between Sir Thomas and Geoffrey Henry, is signed in the lead up to hosting regional Mini Games in 1985. Shifting coalitions saw ten years of political instability. At one stage, all but two MPs were in government.",
"title": "Timeline"
},
{
"paragraph_id": 37,
"text": "1985 — Rarotonga Treaty is opened for signing in the Cook Islands, creating a nuclear-free zone in the South Pacific.",
"title": "Timeline"
},
{
"paragraph_id": 38,
"text": "1986 — In January 1986, following the rift between New Zealand and the US in respect of the ANZUS security arrangements Prime Minister Tom Davis declared the Cook Islands a neutral country, because he considered that New Zealand (which has control over the islands' defence and foreign policy) was no longer in a position to defend the islands. The proclamation of neutrality meant that the Cook Islands would not enter into a military relationship with any foreign power, and, in particular, would prohibit visits by US warships. Visits by US naval vessels were allowed to resume by Henry's Government.",
"title": "Timeline"
},
{
"paragraph_id": 39,
"text": "1990 — Cook Islands – France Maritime Delimitation Agreement establishes the Cook Islands–French Polynesia boundary",
"title": "Timeline"
},
{
"paragraph_id": 40,
"text": "1991 — The Cook Islands signed a treaty of friendship and co-operation with France, covering economic development, trade and surveillance of the islands' EEZ. The establishment of closer relations with France was widely regarded as an expression of the Cook Islands' Government's dissatisfaction with existing arrangements with New Zealand which was no longer in a position to defend the Cook Islands.",
"title": "Timeline"
},
{
"paragraph_id": 41,
"text": "1995 — The French Government resumed its programme of nuclear-weapons testing at Mururoa Atoll in September 1995 upsetting the Cook Islands. New Prime Minister Geoffrey Henry was fiercely critical of the decision and dispatched a vaka (traditional voyaging canoe) with a crew of Cook Islands' traditional warriors to protest near the test site. The tests were concluded in January 1996 and a moratorium was placed on future testing by the French government.",
"title": "Timeline"
},
{
"paragraph_id": 42,
"text": "1997 — Full diplomatic relations established with the People's Republic of China.",
"title": "Timeline"
},
{
"paragraph_id": 43,
"text": "1997 — In November, Cyclone Martin in Manihiki kills at least six people; 80% of buildings are damaged and the black pearl industry suffered severe losses.",
"title": "Timeline"
},
{
"paragraph_id": 44,
"text": "1999 — A second era of political instability begins, starting with five different coalitions in less than nine months, and at least as many since then.",
"title": "Timeline"
},
{
"paragraph_id": 45,
"text": "2000 — Full diplomatic relations concluded with France.",
"title": "Timeline"
},
{
"paragraph_id": 46,
"text": "2002 — Prime Minister Terepai Maoate is ousted from government following second vote of no-confidence in his leadership.",
"title": "Timeline"
},
{
"paragraph_id": 47,
"text": "2004 — Prime Minister Robert Woonton visits China; Chinese Premier Wen Jiabao grants $16 million in development aid.",
"title": "Timeline"
},
{
"paragraph_id": 48,
"text": "2006 — Parliamentary elections held. The Democratic Party keeps majority of seats in parliament, but is unable to command a majority for confidence, forcing a coalition with breakaway MPs who left, then rejoined the \"Demos\".",
"title": "Timeline"
},
{
"paragraph_id": 49,
"text": "2008 — Pacific Island nations imposed a series of measures aimed at halting overfishing.",
"title": "Timeline"
}
] | The Cook Islands are named after Captain James Cook, who visited the islands in 1773 and 1777, although Spanish navigator Alvaro de Mendaña was the first European to reach the islands in 1595. The Cook Islands became aligned to the United Kingdom in 1890, largely because of the fear of British residents that France might occupy the islands as it already had Tahiti. By 1900, the islands were annexed as British territory. In 1901, the islands were included within the boundaries of the Colony of New Zealand. The Cook Islands contain 15 islands in the group spread over a vast area in the South Pacific. The majority of islands are low coral atolls in the Northern Group, with Rarotonga, a volcanic island in the Southern Group, as the main administration and government centre. The main Cook Islands language is Rarotongan Māori. There are some variations in dialect in the 'outer' islands. | 2001-04-26T17:17:56Z | 2023-09-21T18:45:34Z | [
"Template:Commons category",
"Template:Reflist",
"Template:Cite book",
"Template:Cite web",
"Template:History of Oceania",
"Template:Territories of the British Empire",
"Template:Short description",
"Template:Center"
] | https://en.wikipedia.org/wiki/History_of_the_Cook_Islands |
7,069 | Geography of the Cook Islands | 21°14′S 159°46′W / 21.233°S 159.767°W / -21.233; -159.767
The Cook Islands can be divided into two groups: the Southern Cook Islands and the Northern Cook Islands. The country is located in Oceania, in the South Pacific Ocean, about halfway between Hawaii and New Zealand.
From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were cyclones Martin (1997) and Percy (2005). Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests.
Note: The table is ordered from north to south. Population figures from the 2016 census.
This article incorporates public domain material from The World Factbook. CIA. | [
{
"paragraph_id": 0,
"text": "21°14′S 159°46′W / 21.233°S 159.767°W / -21.233; -159.767",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Cook Islands can be divided into two groups: the Southern Cook Islands and the Northern Cook Islands. The country is located in Oceania, in the South Pacific Ocean, about halfway between Hawaii and New Zealand.",
"title": ""
},
{
"paragraph_id": 2,
"text": "From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were cyclones Martin (1997) and Percy (2005). Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Note: The table is ordered from north to south. Population figures from the 2016 census.",
"title": "Islands and reefs"
},
{
"paragraph_id": 4,
"text": "This article incorporates public domain material from The World Factbook. CIA.",
"title": "References"
},
{
"paragraph_id": 5,
"text": "",
"title": "External links"
}
] | The Cook Islands can be divided into two groups: the Southern Cook Islands and the Northern Cook Islands. The country is located in Oceania, in the South Pacific Ocean, about halfway between Hawaii and New Zealand. From March to December, the Cook Islands are in the path of tropical cyclones, the most notable of which were cyclones Martin (1997) and Percy (2005). Two terrestrial ecoregions lie within the islands' territory: the Central Polynesian tropical moist forests and the Cook Islands tropical moist forests. | 2001-04-26T17:18:17Z | 2023-09-09T05:15:23Z | [
"Template:CIA World Factbook",
"Template:Geography of Oceania",
"Template:CookIslands-geo-stub",
"Template:CIA",
"Template:Convert",
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Coord"
] | https://en.wikipedia.org/wiki/Geography_of_the_Cook_Islands |
7,070 | Demographics of the Cook Islands | Demographic features of the population of the Cook Islands include population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
A census is carried out every five years in the Cook Islands. The last census was carried out in 2021 and the next census will be carried out in 2026.
Births and deaths
Religion in the Cook Islands (CIA World Factbook)
The Cook Islands are majority-Protestant, with almost half the population being members of the Reformed Cook Islands Christian Church. Other Protestant denominations include the Seventh-day Adventists, the Assemblies of God and the Apostolic Church (the latter two being Pentecostal denominations). The largest non-Protestant denomination is the Roman Catholic Church, followed by the Church of Jesus Christ of Latter-day Saints. Non-Christian faiths, including Hinduism, Buddhism and Islam, have small followings, primarily among non-indigenous inhabitants.
The indigenous Polynesian people of the Cook Islands are known as Cook Islands Māori. These include speakers of the Cook Islands Māori language, closely related to Tahitian and New Zealand Māori, who form the majority of the population and inhabit the southern islands including Rarotonga; and also the people of Pukapuka, who speak a language more closely related to Samoan. Cook Islanders of non-indigenous descent include other Pacific Island peoples, Papa'a (Europeans), and those of Asian descent.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated. | [
{
"paragraph_id": 0,
"text": "Demographic features of the population of the Cook Islands include population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A census is carried out every five years in the Cook Islands. The last census was carried out in 2021 and the next census will be carried out in 2026.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Births and deaths",
"title": "Vital statistics"
},
{
"paragraph_id": 3,
"text": "Religion in the Cook Islands (CIA World Factbook)",
"title": "Religion"
},
{
"paragraph_id": 4,
"text": "The Cook Islands are majority-Protestant, with almost half the population being members of the Reformed Cook Islands Christian Church. Other Protestant denominations include Seventh-day Adventists, Assemblies of God and the Apostolic Church (the latter two being Pentecostal denominations). The largest non-Protestant denomination are Roman Catholics, followed by the Church of Jesus Christ of Latter-day Saints. Non-Christian faiths including Hinduism, Buddhism and Islam have small followings primarily by non-indigenous inhabitants.",
"title": "Religion"
},
{
"paragraph_id": 5,
"text": "The indigenous Polynesian people of the Cook islands are known as Cook Islands Māori. These include speakers of Cook Islands Māori language, closely related to Tahitian and New Zealand Māori, who form the majority of the population and inhabit the southern islands including Rarotonga; and also the people of Pukapuka, who speak a language more closely related to Samoan. Cook Islanders of non-indigenous descent include other Pacific Island peoples, Papa'a (Europeans), and those of Asian descent.",
"title": "Ethnic groups"
},
{
"paragraph_id": 6,
"text": "The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.",
"title": "CIA World Factbook demographic statistics"
},
{
"paragraph_id": 7,
"text": "",
"title": "References"
}
] | Demographic features of the population of the Cook Islands include population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. A census is carried out every five years in the Cook Islands. The last census was carried out in 2021 and the next census will be carried out in 2026. | 2001-04-26T17:18:35Z | 2023-09-04T17:10:22Z | [
"Template:Pie chart",
"Template:Reflist",
"Template:Cite web",
"Template:CookIslands-stub",
"Template:Historical populations",
"Template:Main",
"Template:Decrease",
"Template:Increase",
"Template:Population pyramid",
"Template:Oceania topic",
"Template:Cite book",
"Template:Hidden begin",
"Template:Hidden end"
] | https://en.wikipedia.org/wiki/Demographics_of_the_Cook_Islands |
7,071 | Politics of the Cook Islands | The politics of the Cook Islands takes place in a framework of a parliamentary representative democracy within a constitutional monarchy. The Monarch of New Zealand, represented in the Cook Islands by the King or Queen's Representative, is the Head of State; the prime minister is the head of government of a multi-party system. The nation is self-governing and fully responsible for internal and foreign affairs. Since 2001, the Cook Islands has run its own foreign and defence policy. Executive power is exercised by the government, while legislative power is vested in both the government and the islands' parliament. The judiciary is independent of the executive and the legislature.
The Constitution of the Cook Islands took effect on August 4, 1965, when the Cook Islands became a self-governing state in free association with New Zealand. The anniversary of these events in 1965 is commemorated annually on Constitution Day, with week-long activities known locally as the Te Maeva Nui celebrations.
Ten years of rule by the Cook Islands Party (CIP) came to an end on 18 November 1999 with the resignation of Prime Minister Joe Williams. Williams had led a minority government since October 1999, when the New Alliance Party (NAP) left the government coalition and joined the main opposition Democratic Party (DAP). On 18 November 1999, DAP leader Dr. Terepai Maoate was sworn in as prime minister. He was succeeded by his co-partisan Robert Woonton. When Dr Woonton lost his seat in the 2004 elections, Jim Marurai took over. In the 2010 elections, the CIP regained power and Henry Puna was sworn in as prime minister on 30 November 2010. His Deputy, Mark Brown, succeeded Puna in 2020, when Puna was elected Secretary General of the Pacific Islands Forum.
Prime Minister Mark Brown was re-elected in 2022 with an increased majority.
The Parliament of the Cook Islands has 24 members, elected for a five-year term in single-seat constituencies. There is also a House of Ariki, composed of chiefs, which has a purely advisory role. The Koutu Nui is a similar organisation consisting of sub-chiefs. It was established by a 1972 amendment to the 1966 House of Ariki Act. The current president is Te Tika Mataiapo Dorice Reid.
On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. "Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers," chief Makea Vakatini Joseph Ariki explained. The Cook Islands Herald suggested that the ariki were attempting thereby to regain some of their traditional prestige or mana. Prime Minister Jim Marurai described the attempted takeover as "ill-founded and nonsensical". By June 23, the situation appeared to have normalised, with members of the House of Ariki agreeing to return to their regular duties.
The judiciary is established by part IV of the Constitution, and consists of the High Court of the Cook Islands and the Cook Islands Court of Appeal. The Judicial Committee of the Privy Council serves as a final court of appeal. Judges are appointed by the Queen's Representative on the advice of the Executive Council as given by the Chief Justice and the Minister of Justice. Non-resident Judges are appointed for a three-year term; other Judges are appointed for life. Judges may be removed from office by the Queen's Representative on the recommendation of an investigative tribunal and only for inability to perform their office, or for misbehaviour.
With regard to the legal profession, Iaveta Taunga o Te Tini Short was the first Cook Islander to establish a law practice in 1968. He would later become a Cabinet Minister (1978) and High Commissioner for the Cook Islands (1985).
The 1999 election produced a hung Parliament. Cook Islands Party leader Geoffrey Henry remained prime minister, but was replaced after a month by Joe Williams following a coalition realignment. A further realignment three months later saw Williams replaced by Democratic Party leader Terepai Maoate. A third realignment in 2002 saw Maoate replaced mid-term by his deputy, Robert Woonton, who governed with the backing of the CIP.
The Democratic Party won a majority in the 2004 election, but Woonton lost his seat and was replaced by Jim Marurai. In 2005 Marurai left the Democrats due to internal disputes, founding his own Cook Islands First Party. He continued to govern with the support of the CIP, but in 2005 returned to the Democrats. The loss of several by-elections forced a snap election in 2006, which produced a solid majority for the Democrats and saw Marurai continue as prime minister.
In December 2009, Marurai sacked his Deputy Prime Minister, Terepai Maoate, sparking a mass resignation of Democratic Party cabinet members. He and new Deputy Prime Minister Robert Wigmore were subsequently expelled from the Democratic Party. Marurai appointed three junior members of the Democratic Party to Cabinet, but on 31 December 2009 the party withdrew its support. | [
{
"paragraph_id": 0,
"text": "The politics of the Cook Islands takes place in a framework of a parliamentary representative democracy within a constitutional monarchy. The Monarch of New Zealand, represented in the Cook Islands by the King or Queen's Representative, was the Head of State; the prime minister is the head of government of a multi-party system. The nation is self-governing and are fully responsible for internal and foreign affairs. Since 2001, the Cook Islands has run its own foreign and defence policy. Executive power is exercised by the government, while legislative power is vested in both the government and the islands' parliament. The judiciary is independent of the executive and the legislatures.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Constitution of the Cook Islands took effect on August 4, 1965, when the Cook Islands became a self-governing state in free association with New Zealand. The anniversary of these events in 1965 is commemorated annually on Constitution Day, with week long activities known as Te Maeva Nui Celebrations locally.",
"title": "Constitution"
},
{
"paragraph_id": 2,
"text": "Ten years of rule by the Cook Islands Party (CIP) came to an end 18 November 1999 with the resignation of Prime Minister Joe Williams. Williams had led a minority government since October 1999 when the New Alliance Party (NAP) left the government coalition and joined the main opposition Democratic Party (DAP). On 18 November 1999, DAP leader Dr. Terepai Maoate was sworn in as prime minister. He was succeeded by his co-partisan Robert Woonton. When Dr Woonton lost his seat in the 2004 elections, Jim Marurai took over. In the 2010 elections, the CIP regained power and Henry Puna was sworn in as prime minister on 30 November 2010. His Deputy, Mark Brown, succeeded Puna in 2020, when Puna was elected Secretary General of the Cook Islands.",
"title": "Executive"
},
{
"paragraph_id": 3,
"text": "Prime Minister Mark Brown was reelected in 2022 with an increased majority",
"title": "Executive"
},
{
"paragraph_id": 4,
"text": "The Parliament of the Cook Islands has 24 members, elected for a five-year term in single-seat constituencies. There is also a House of Ariki, composed of chiefs, which has a purely advisory role. The Koutu Nui is a similar organization consisting of sub-chiefs. It was established by an amendment in 1972 of the 1966 House of Ariki Act. The current president is Te Tika Mataiapo Dorice Reid.",
"title": "Legislature"
},
{
"paragraph_id": 5,
"text": "On June 13, 2008, a small majority of members of the House of Ariki attempted a coup, claiming to dissolve the elected government and to take control of the country's leadership. \"Basically we are dissolving the leadership, the prime minister and the deputy prime minister and the ministers,\" chief Makea Vakatini Joseph Ariki explained. The Cook Islands Herald suggested that the ariki were attempting thereby to regain some of their traditional prestige or mana. Prime Minister Jim Marurai described the take-over move as \"ill-founded and nonsensical\". By June 23, the situation appeared to have normalised, with members of the House of Ariki accepting to return to their regular duties.",
"title": "Legislature"
},
{
"paragraph_id": 6,
"text": "The judiciary is established by part IV of the Constitution, and consists of the High Court of the Cook Islands and the Cook Islands Court of Appeal. The Judicial Committee of the Privy Council serves as a final court of appeal. Judges are appointed by the Queen's Representative on the advice of the Executive Council as given by the Chief Justice and the Minister of Justice. Non-resident Judges are appointed for a three-year term; other Judges are appointed for life. Judges may be removed from office by the Queen's Representative on the recommendation of an investigative tribunal and only for inability to perform their office, or for misbehaviour.",
"title": "Judiciary"
},
{
"paragraph_id": 7,
"text": "With regard to the legal profession, Iaveta Taunga o Te Tini Short was the first Cook Islander to establish a law practice in 1968. He would later become a Cabinet Minister (1978) and High Commissioner for the Cook Islands (1985).",
"title": "Judiciary"
},
{
"paragraph_id": 8,
"text": "The 1999 election produced a hung Parliament. Cook Islands Party leader Geoffrey Henry remained prime minister, but was replaced after a month by Joe Williams following a coalition realignment. A further realignment three months later saw Williams replaced by Democratic Party leader Terepai Maoate. A third realignment saw Maoate replaced mid-term by his deputy Robert Woonton in 2002, who ruled with the backing of the CIP.",
"title": "Recent political history"
},
{
"paragraph_id": 9,
"text": "The Democratic Party won a majority in the 2004 election, but Woonton lost his seat, and was replaced by Jim Marurai. In 2005 Marurai left the Democrats due to an internal disputes, founding his own Cook Islands First Party. He continued to govern with the support of the CIP, but in 2005 returned to the Democrats. The loss of several by-elections forced a snap-election in 2006, which produced a solid majority for the Democrats and saw Marurai continue as prime minister.",
"title": "Recent political history"
},
{
"paragraph_id": 10,
"text": "In December 2009, Marurai sacked his Deputy Prime Minister, Terepai Maoate, sparking a mass-resignation of Democratic Party cabinet members He and new Deputy Prime Minister Robert Wigmore were subsequently expelled from the Democratic Party. Marurai appointed three junior members of the Democratic party to Cabinet, but on 31 December 2009 the party withdrew its support.",
"title": "Recent political history"
},
{
"paragraph_id": 11,
"text": "",
"title": "Recent political history"
}
] | The politics of the Cook Islands takes place in a framework of a parliamentary representative democracy within a constitutional monarchy. The Monarch of New Zealand, represented in the Cook Islands by the King or Queen's Representative, is the Head of State; the prime minister is the head of government of a multi-party system. The nation is self-governing and fully responsible for internal and foreign affairs. Since 2001, the Cook Islands has run its own foreign and defence policy. Executive power is exercised by the government, while legislative power is vested in both the government and the islands' parliament. The judiciary is independent of the executive and the legislatures. | 2001-04-26T17:18:59Z | 2023-09-27T13:43:43Z | [
"Template:Elect",
"Template:Cite book",
"Template:Navboxes",
"Template:Cite news",
"Template:Cite web",
"Template:Short description",
"Template:Politics of the Cook Islands",
"Template:Office-table",
"Template:Main",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Politics_of_the_Cook_Islands |
7,072 | Economy of the Cook Islands | The economy of the Cook Islands is based mainly on tourism, with minor exports made up of tropical and citrus fruit. Manufacturing activities are limited to fruit-processing, clothing and handicrafts.
As in many other South Pacific nations, the Cook Islands's economy is hindered by the country's isolation from foreign markets, lack of natural resources aside from fish, periodic devastation from natural disasters, and inadequate infrastructure.
Trade deficits are made up for by remittances from emigrants and by foreign aid, overwhelmingly from New Zealand. Efforts to exploit tourism potential, encourage offshore banking, and expand the mining and fishing industries have been partially successful in stimulating investment and growth.
Banks in the Cook Islands are regulated under the Banking Act 2011. Banks must be licensed and are supervised by the Cook Islands Financial Supervisory Commission.
The Cook Islands developed an offshore financial services industry in the early 1980s. Allegations that New Zealand-based companies were using it as a tax haven led to the Winebox Inquiry in New Zealand in the 1990s, and in 2000 it was listed as a tax haven by the OECD. In 2002 it was delisted after it agreed to fiscal transparency and to exchange tax information. Allegations of being a tax haven re-emerged in 2013 following the International Consortium of Investigative Journalists Offshore Leaks. Trusts incorporated in the Cook Islands are used to provide anonymity and asset-protection. The Cook Islands also featured in the Panama Papers, Paradise Papers, and Pandora Papers financial leaks.
Economist Vaine Nooana-Arioka has been executive director of the Bank of the Cook Islands since 2008.
Telecom Cook Islands Ltd (TCI) is the sole provider of telecommunications in the Cook Islands. TCI is a private company owned by Spark New Zealand Ltd (60%) and the Cook Islands Government (40%). In operation since July 1991, TCI provides local, national and international telecommunications as well as internet access on all islands except Suwarrow. Communications to Suwarrow is via HF radio. | [
{
"paragraph_id": 0,
"text": "The economy of the Cook Islands is based mainly on tourism, with minor exports made up of tropical and citrus fruit. Manufacturing activities are limited to fruit-processing, clothing and handicrafts.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As in many other South Pacific nations, the Cook Islands's economy is hindered by the country's isolation from foreign markets, lack of natural resources aside from fish, periodic devastation from natural disasters, and inadequate infrastructure.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Trade deficits are made up for by remittances from emigrants and by foreign aid, overwhelmingly from New Zealand. Efforts to exploit tourism potential, encourage offshore banking, and expand the mining and fishing industries have been partially successful in stimulating investment and growth.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Banks in the Cook Islands are regulated under the Banking Act 2011. Banks must be licensed and are supervised by the Cook Islands Financial Supervisory Commission.",
"title": "Banking and finance"
},
{
"paragraph_id": 4,
"text": "The Cook Islands developed an offshore financial services industry in the early 1980s. Allegations that New Zealand-based companies were using it as a tax haven led to the Winebox Inquiry in New Zealand in the 1990s, and in 2000 it was listed as a tax haven by the OECD. In 2002 it was delisted after it agreed to fiscal transparency and to exchange tax information. Allegations of being a tax haven re-emerged in 2013 following the International Consortium of Investigative Journalists Offshore Leaks. Trusts incorporated in the Cook Islands are used to provide anonymity and asset-protection. The Cook Islands also featured in the Panama Papers, Paradise Papers, and Pandora Papers financial leaks.",
"title": "Banking and finance"
},
{
"paragraph_id": 5,
"text": "Economist Vaine Nooana-Arioka has been executive director of the Bank of the Cook Islands since 2008.",
"title": "Banking and finance"
},
{
"paragraph_id": 6,
"text": "Telecom Cook Islands Ltd (TCI) is the sole provider of telecommunications in the Cook Islands. TCI is a private company owned by Spark New Zealand Ltd (60%) and the Cook Islands Government (40%). In operation since July 1991, TCI provides local, national and international telecommunications as well as internet access on all islands except Suwarrow. Communications to Suwarrow is via HF radio.",
"title": "Telecommunications"
}
] | The economy of the Cook Islands is based mainly on tourism, with minor exports made up of tropical and citrus fruit. Manufacturing activities are limited to fruit-processing, clothing and handicrafts. As in many other South Pacific nations, the Cook Islands's economy is hindered by the country's isolation from foreign markets, lack of natural resources aside from fish, periodic devastation from natural disasters, and inadequate infrastructure. Trade deficits are made up for by remittances from emigrants and by foreign aid, overwhelmingly from New Zealand. Efforts to exploit tourism potential, encourage offshore banking, and expand the mining and fishing industries have been partially successful in stimulating investment and growth. | 2022-12-02T01:58:23Z | [
"Template:Update",
"Template:Convert",
"Template:Reflist",
"Template:Cite web",
"Template:Cbignore",
"Template:Cook Islands topics",
"Template:Oceania in topic"
] | https://en.wikipedia.org/wiki/Economy_of_the_Cook_Islands |
|
7,073 | Telecommunications in the Cook Islands | Like most countries and territories in Oceania, telecommunications in the Cook Islands is limited by its isolation and low population, with only one major television broadcasting station and six radio stations. However, most residents have a main line or mobile phone. Its telecommunications are mainly provided by Telecom Cook Islands, who is currently working with O3b Networks, Ltd. for faster Internet connection.
In February 2015 the former owner of Telecom Cook Islands Ltd., Spark New Zealand, sold its 60% interest for approximately NZD 23 million (US$17.3 million) to Teleraro Limited.
In July 2012, there were about 7,500 main line telephones, which covers about 98% of the country's population. There were approximately 7,800 mobile phones in 2009. Telecom Cook Islands, owned by Spark New Zealand, is the islands' main telephone system and offers international direct dialling, Internet, email, fax, and Telex. The individual islands are connected by a combination of satellite earth stations, microwave systems, and very high frequency and high frequency radiotelephone; within the islands, service is provided by small exchanges connected to subscribers by open wire, cable, and fibre-optic cable. For international communication, they rely on the satellite earth station Intelsat.
In 2003, the largest island, Rarotonga, began using a GSM/GPRS mobile data service on GSM 900. By 2013, 3G UMTS 900 with HSPA+ had been introduced, covering 98% of Rarotonga. In March 2017, 4G+ was launched in Rarotonga on LTE700 (B28A) and LTE1800 (B3).
Mobile service on Aitutaki ran on a GSM/GPRS mobile data system using GSM 900 from 2006 to 2013; in 2014, 3G UMTS 900 with HSPA+ was introduced. In March 2017, 4G+ was also launched on Aitutaki with LTE700 (B28A). Mobile coverage in the rest of the outer islands (Pa Enua) was well established by 2007 on GSM 900, covering Mangaia's three villages (Oneroa, Ivirua, Tamarua), Atiu, Mauke, Mitiaro and Palmerston in the Southern Group (Pa Enua Tonga), and Nassau, Pukapuka, Rakahanga, Manihiki's two villages (Tukao, Tauhunu) and Penrhyn's two villages (Omoka, Tetautua) in the Northern Group (Pa Enua Tokerau).
The Cook Islands uses the country calling code +682.
There are six radio stations in the Cook Islands, with one reaching all islands. As of 1997 there were 14,000 radios.
Cook Islands Television broadcasts from Rarotonga, providing a mix of local news and overseas-sourced programs. As of 1997 there were 4,000 television sets.
There were 6,000 Internet users in 2009 and 3,562 Internet hosts as of 2012. The country code top-level domain for the Cook Islands is .ck.
In June 2010, Telecom Cook Islands partnered with O3b Networks, Ltd. to provide faster Internet connection to the Cook Islands. On 25 June 2013 the O3b satellite constellation was launched from an Arianespace Soyuz ST-B rocket in French Guiana. The medium Earth orbit satellite orbits at 8,062 kilometres (5,009 mi) and uses the Ka band. It has a latency of about 100 milliseconds because it is much closer to Earth than standard geostationary satellites, whose latencies can be over 600 milliseconds. Although the initial launch consisted of 4 satellites, as many as 20 may be launched eventually to serve various areas with little or no optical fibre service, the first of which is the Cook Islands.
In December 2015, Alcatel-Lucent and Bluesky Pacific Group announced that they would build the Moana Cable system connecting New Zealand to Hawaii with a single fibre pair branching off to the Cook Islands. The Moana Cable is expected to be completed in 2018.
In July 2020 the Cook Islands were connected to the Manatua One Polynesia Fibre Cable, which links the Cook Islands, Niue, Samoa and Tahiti. The cable has landing points at Rarotonga and Aitutaki. | [
{
"paragraph_id": 0,
"text": "Like most countries and territories in Oceania, telecommunications in the Cook Islands is limited by its isolation and low population, with only one major television broadcasting station and six radio stations. However, most residents have a main line or mobile phone. Its telecommunications are mainly provided by Telecom Cook Islands, who is currently working with O3b Networks, Ltd. for faster Internet connection.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In February 2015 the former owner of Telecom Cook Islands Ltd., Spark New Zealand, sold its 60% interest for approximately NZD 23 million (US$17.3 million) to Teleraro Limited.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In July 2012, there were about 7,500 main line telephones, which covers about 98% of the country's population. There were approximately 7,800 mobile phones in 2009. Telecom Cook Islands, owned by Spark New Zealand, is the islands' main telephone system and offers international direct dialling, Internet, email, fax, and Telex. The individual islands are connected by a combination of satellite earth stations, microwave systems, and very high frequency and high frequency radiotelephone; within the islands, service is provided by small exchanges connected to subscribers by open wire, cable, and fibre-optic cable. For international communication, they rely on the satellite earth station Intelsat.",
"title": "Telephone"
},
{
"paragraph_id": 3,
"text": "In 2003, the largest island of Rarotonga started using a GSM/GPRS mobile data service system with GSM 900 by 2013 3G UMTS 900 was introduce covering 98% of Rarotonga with HSPA+. In March 2017 4G+ launch in Rarotonga with LTE700 (B28A) and LTE1800 (B3).",
"title": "Telephone"
},
{
"paragraph_id": 4,
"text": "Mobile service covers Aitutaki GSM/GPRS mobile data service system in GSM 900 from 2006 to 2013 while in 2014 3G UMTS 900 was introduce with HSPA+ stand system. In March 2017 4G+ also launch in Aitutaki with LTE700 (B28A). The rest of the Outer Islands (Pa Enua) mobile was well establish in 2007 with mobile coverage at GSM 900 from Mangaia 3 villages (Oneroa, Ivirua, Tamarua), Atiu, Mauke, Mitiaro, Palmerston in the Southern Group (Pa Enua Tonga) and the Northern Group (Pa Enua Tokerau) Nassau, Pukapuka, Rakahanga, Manihiki 2 Village (Tukao, Tauhunu) and Penrhyn 2 villages (Omoka Tetautua).",
"title": "Telephone"
},
{
"paragraph_id": 5,
"text": "The Cook Islands uses the country calling code +682.",
"title": "Telephone"
},
{
"paragraph_id": 6,
"text": "There are six radio stations in the Cook Islands, with one reaching all islands. As of 1997 there were 14,000 radios.",
"title": "Broadcasting"
},
{
"paragraph_id": 7,
"text": "Cook Islands Television broadcasts from Rarotonga, providing a mix of local news and overseas-sourced programs. As of 1997 there were 4,000 television sets.",
"title": "Broadcasting"
},
{
"paragraph_id": 8,
"text": "There were 6,000 Internet users in 2009 and 3,562 Internet hosts as of 2012. The country code top-level domain for the Cook Islands is .ck.",
"title": "Internet"
},
{
"paragraph_id": 9,
"text": "In June 2010, Telecom Cook Islands partnered with O3b Networks, Ltd. to provide faster Internet connection to the Cook Islands. On 25 June 2013 the O3b satellite constellation was launched from an Arianespace Soyuz ST-B rocket in French Guiana. The medium Earth orbit satellite orbits at 8,062 kilometres (5,009 mi) and uses the Ka band. It has a latency of about 100 milliseconds because it is much closer to Earth than standard geostationary satellites, whose latencies can be over 600 milliseconds. Although the initial launch consisted of 4 satellites, as many as 20 may be launched eventually to serve various areas with little or no optical fibre service, the first of which is the Cook Islands.",
"title": "Internet"
},
{
"paragraph_id": 10,
"text": "In December 2015, Alcatel-Lucent and Bluesky Pacific Group announced that they would build the Moana Cable system connecting New Zealand to Hawaii with a single fibre pair branching off to the Cook Islands. The Moana Cable is expected to be completed in 2018.",
"title": "Internet"
},
{
"paragraph_id": 11,
"text": "In July 2020 the Cook Islands were connected to the Manatua One Polynesia Fibre Cable, which links the Cook Islands, Niue, Samoa and Tahiti. The cable has landing points at Rarotonga and Aitutaki.",
"title": "Internet"
}
] | Like most countries and territories in Oceania, telecommunications in the Cook Islands is limited by its isolation and low population, with only one major television broadcasting station and six radio stations. However, most residents have a main line or mobile phone. Its telecommunications are mainly provided by Telecom Cook Islands, who is currently working with O3b Networks, Ltd. for faster Internet connection. In February 2015 the former owner of Telecom Cook Islands Ltd., Spark New Zealand, sold its 60% interest for approximately NZD 23 million to Teleraro Limited. | 2001-07-20T18:37:31Z | 2023-11-26T01:42:43Z | [
"Template:See",
"Template:Convert",
"Template:Reflist",
"Template:Cite web",
"Template:CIA World Factbook",
"Template:Cite news",
"Template:Oceania topic",
"Template:EngvarB",
"Template:As of",
"Template:Cite press release",
"Template:Portal bar",
"Template:Telecommunications",
"Template:Use dmy dates"
] | https://en.wikipedia.org/wiki/Telecommunications_in_the_Cook_Islands |
7,074 | Transport in the Cook Islands | This article lists transport in the Cook Islands.
The Cook Islands uses left-hand traffic. The maximum speed limit is 50 km/h. On the main island of Rarotonga, there are no traffic lights and only two roundabouts. A bus operates clockwise and anti-clockwise services around the island's coastal ring road.
Road safety is poor. In 2011, the Cook Islands had the second-highest per-capita road deaths in the world. In 2018, crashes neared a record high, with speeding, alcohol and careless behaviour being the main causes. Motor-scooters are a common form of transport, but there was no requirement for helmets, making them a common cause of death and injuries. Legislation requiring helmets was passed in 2007, but scrapped in early 2008 before it came into force. In 2016, a law was passed requiring visitors and riders aged 16 to 25 to wear helmets, but it was widely flouted. In March 2020 the Cook Islands parliament again legislated for compulsory helmets to be worn from June 26, but implementation was delayed until July 31, and then until September 30.
The Cook Islands has no effective rail transport. Rarotonga had a 170m tourist railway, the Rarotonga Steam Railway, but it is no longer in working condition.
The Cook Islands have a long history of sea transport. The islands were colonised from Tahiti, and in turn colonised New Zealand in ocean-going waka. In the late nineteenth century, following European contact, the islands had a significant fleet of schooners, which they used to travel between islands and to trade with Tahiti and New Zealand. In 1899, locally owned shipping carried 10% of all international trade to the islands, and 66% of all trade carried by sail. Indigenous-owned shipping was driven out of business following New Zealand's acquisition of the islands, replaced by government-owned vessels, New Zealand trading companies, and the steamships of the Union Steamship Company.
International shipping is provided by Pacific Forum Line and Matson, Inc. (as EXCIL shipping). Only the port of Avatiu can handle containers, with ships unloading at Aitutaki using lighters.
There are two inter-island shipping companies: Taio Shipping, operating two vessels, and Cook Islands Towage, operating one.
In the past, shipping interruptions have led to shortages of imported goods and fuel, and electricity blackouts on the outer islands. Shipping has frequently been subsidised to ensure service. In 2019 the Cook Islands government announced that it would acquire a dedicated cargo ship for the outer islands after Cook Islands Towage's barge was sold. It subsequently delayed the purchase pending the development of a Cook Islands Shipping Roadmap, and issued a tender for a Pa Enua Shipping Charter.
The Cook Islands operates an open ship registry and has been placed on the Paris Memorandum of Understanding on Port State Control Black List as a flag of convenience. Ships registered in the Cook Islands have been used to smuggle oil from Iran in defiance of international sanctions. In February 2021 two ships were removed from the shipping register for concealing their movements by turning their Automatic identification system off. In April 2022 the motoryacht Tango owned by sanctioned Russian oligarch Viktor Vekselberg was seized in Spain. Maritime Cook Islands claimed that no other sanctioned vessels were on its registry. In July 2022 two yachts owned by sanctioned oligarch Roman Abramovich were reflagged as Cook Islands vessels, allowing them to escape arrest in Antigua and Barbuda.
The smaller islands have passages through their reefs, but these are unsuitable for large vessels.
The Cook Islands is served by one domestic airline, Air Rarotonga. A further three foreign airlines provide international service.
There is one international airport, Rarotonga International Airport. Eight airports provide local or charter services. Only Rarotonga and Aitutaki Airport are paved. | [
{
"paragraph_id": 0,
"text": "This article lists transport in the Cook Islands.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Cook Islands uses left-handed traffic. The maximum speed limit is 50 km/h. On the main island of Rarotonga, there are no traffic lights and only two roundabouts. A bus operates clockwise and anti-clockwise services around the islands coastal ring-road.",
"title": "Road transport"
},
{
"paragraph_id": 2,
"text": "Road safety is poor. In 2011, the Cook Islands had the second-highest per-capita road deaths in the world. In 2018, crashes neared a record high, with speeding, alcohol and careless behaviour being the main causes. Motor-scooters are a common form of transport, but there was no requirement for helmets, making them a common cause of death and injuries. Legislation requiring helmets was passed in 2007, but scrapped in early 2008 before it came into force. In 2016, a law was passed requiring visitors and riders aged 16 to 25 to wear helmets, but it was widely flouted. In March 2020 the Cook Islands parliament again legislated for compulsory helmets to be worn from June 26, but implementation was delayed until July 31, and then until September 30.",
"title": "Road transport"
},
{
"paragraph_id": 3,
"text": "The Cook Islands has no effective rail transport. Rarotonga had a 170m tourist railway, the Rarotonga Steam Railway, but it is no longer in working condition.",
"title": "Rail transport"
},
{
"paragraph_id": 4,
"text": "The Cook Islands have a long history of sea transport. The islands were colonised from Tahiti, and in turn colonised New Zealand in ocean-going waka. In the late nineteenth century, following European contact, the islands had a significant fleet of schooners, which they used to travel between islands and to trade with Tahiti and New Zealand. In 1899, locally owned shipping carried 10% of all international trade to the islands, and 66% of all trade carried by sail. Indigenous-owned shipping was driven out of business following New Zealand's acquisition of the islands, replaced by government-owned vessels, New Zealand trading companies, and the steamships of the Union Steamship Company.",
"title": "Water transport"
},
{
"paragraph_id": 5,
"text": "International shipping is provided by Pacific Forum Line and Matson, Inc. (as EXCIL shipping). Only the port of Avatiu can handle containers, with ships unloading at Aitutaki using lighters.",
"title": "Water transport"
},
{
"paragraph_id": 6,
"text": "There are two inter-island shipping companies: Taio Shipping, operating two vessels, and Cook Islands Towage, operating one.",
"title": "Water transport"
},
{
"paragraph_id": 7,
"text": "In the past, shipping interruptions have led to shortages of imported goods and fuel, and electricity blackouts on the outer islands. Shipping has frequently been subsidised to ensure service. In 2019 the Cook Islands government announced that it would acquire a dedicated cargo ship for the outer islands after Cook Islands Towage's barge was sold. It subsequently delayed the purchase pending the development of a Cook Islands Shipping Roadmap, and issued a tender for a Pa Enua Shipping Charter.",
"title": "Water transport"
},
{
"paragraph_id": 8,
"text": "The Cook Islands operates an open ship registry and has been placed on the Paris Memorandum of Understanding on Port State Control Black List as a flag of convenience. Ships registered in the Cook Islands have been used to smuggle oil from Iran in defiance of international sanctions. In February 2021 two ships were removed from the shipping register for concealing their movements by turning their Automatic identification system off. In April 2022 the motoryacht Tango owned by sanctioned Russian oligarch Viktor Vekselberg was seized in Spain. Maritime Cook Islands claimed that no other sanctioned vessels were on its registry. In July 2022 two yachts owned by sanctioned oligarch Roman Abramovich were reflagged as Cook Islands vessels, allowing them to escape arrest in Antigua and Barbuda.",
"title": "Water transport"
},
{
"paragraph_id": 9,
"text": "The smaller islands have passages through their reefs, but these are unsuitable for large vessels.",
"title": "Water transport"
},
{
"paragraph_id": 10,
"text": "The Cook Islands is served by one domestic airline, Air Rarotonga. A further three foreign airlines provide international service.",
"title": "Air transport"
},
{
"paragraph_id": 11,
"text": "There is one international airport, Rarotonga International Airport. Eight airports provide local or charter services. Only Rarotonga and Aitutaki Airport are paved.",
"title": "Air transport"
}
] | This article lists transport in the Cook Islands. | 2023-07-15T04:57:05Z | [
"Template:Use dmy dates",
"Template:Further",
"Template:Reflist",
"Template:Cite web",
"Template:Commons category",
"Template:Oceania in topic"
] | https://en.wikipedia.org/wiki/Transport_in_the_Cook_Islands |
|
7,077 | Computer file | In computing, a computer file is a resource for recording data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet.
Different types of computer files are designed for different purposes. A file may be designed to store an image, a written message, a video, a program, or any wide variety of other kinds of data. Certain files can store multiple data types at once.
By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times.
Files are typically organized in a file system, which tracks file locations on the disk and enables user access.
The word "file" derives from the Latin filum ("a thread, string").
"File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards."
In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics."
In 1952, "file" denoted, among other things, information stored on punched cards.
In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, c. 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased.
On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data.
On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file (.txt in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself.
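As a minimal illustration of this point, the following Python sketch (the file name example.txt is hypothetical) writes a handful of bytes to disk and reads them back, showing that the stored bytes are just numbers until they are interpreted, here as UTF-8 text:

# A file is only a container of bytes; the .txt extension merely suggests
# how those bytes should be interpreted.
data = "Cook Islands\n".encode("utf-8")    # text encoded into raw bytes

with open("example.txt", "wb") as f:       # hypothetical file name
    f.write(data)

with open("example.txt", "rb") as f:
    raw = f.read()                         # the stored bytes, uninterpreted

print(list(raw[:4]))                       # e.g. [67, 111, 111, 107] - plain numbers
print(raw.decode("utf-8"), end="")         # the same bytes interpreted as UTF-8 text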
Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are more susceptible to loss of metadata than container and archive file formats.
At any instant in time, a file has a specific size, normally expressed as a number of bytes, that indicates how much storage is occupied by the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files).
The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents (the results of aborted disk operations). For example, the file to which the link /bin/ls points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with /dev/null which is also a file, but as a character special file, its size is not meaningful.
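The size bookkeeping described above can be observed with a short Python sketch using os.stat; the paths /bin/ls and /dev/null are only meaningful on Unix-like systems, and empty.flag is a hypothetical name:

import os

# A newly created file with no data written to it is a zero-byte file.
open("empty.flag", "wb").close()           # hypothetical zero-byte flag file
print(os.stat("empty.flag").st_size)       # 0

# A regular file reports how many bytes of storage its contents occupy.
print(os.stat("/bin/ls").st_size)          # a fixed byte count that seldom changes

# /dev/null is a character special file, so its reported size is not meaningful.
print(os.stat("/dev/null").st_size)        # 0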
Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable.
The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names.
In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file.
Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of an archive file are a lower number of files for easier transfer, reduced storage usage, or simply the organization of outdated files. An archive file must often be unpacked before its contents can be used.
The most basic operations that programs can perform on a file are creating it, opening it, reading data from it, writing data to it, seeking to a position within it, truncating it, closing it, and deleting it.
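A minimal sketch of those basic operations using only Python's standard library (the file name notes.txt is an assumption for illustration):

import os

# Create a file, write data to it, and close it.
with open("notes.txt", "w", encoding="utf-8") as f:
    f.write("first line\n")

# Reopen the same file and read its contents back.
with open("notes.txt", "r", encoding="utf-8") as f:
    print(f.read(), end="")

# Truncate the file to zero bytes, then delete it.
with open("notes.txt", "r+", encoding="utf-8") as f:
    f.truncate(0)
os.remove("notes.txt")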
Files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or by command lines (CLI).
In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user-space is simply through its filename (instead of its inode). For example, rm filename will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's memory space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data.
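The link behaviour described above can be demonstrated directly on a Unix-like system; in this hedged sketch the file names are hypothetical, and st_nlink shows how many links currently point at the file's data:

import os

with open("report.dat", "w") as f:          # hypothetical file
    f.write("payload")

os.link("report.dat", "report_alias.dat")   # create a second hard link (another name)
print(os.stat("report.dat").st_nlink)       # 2 - two names point at the same data

os.unlink("report.dat")                     # removes one name, not the data itself
with open("report_alias.dat") as f:
    print(f.read())                         # "payload" is still readable

os.unlink("report_alias.dat")               # last link removed; space may be reallocated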
File moves within a file system complete almost immediately because the data content does not need to be rewritten. Only the paths need to be changed.
There are two distinct implementations of file moves.
When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after being transferred, while other software deletes all files at once only after every file has been transferred.
With the mv command for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: mv -n sourcePath/* targetPath), while the latter method is used when selecting entire directories (example: mv -n sourcePath targetPath). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method when using Media Transfer Protocol, as described in Media Transfer Protocol § File move behavior.
The former method (individual deletion from source) has the benefit that space is released from the source device or partition almost immediately after the transfer has begun, that is, as soon as the first file is finished. With the latter method, space is only freed after the transfer of the entire selection has finished.
If an incomplete file transfer with the latter method is aborted unexpectedly, perhaps due to an unexpected power-off, system halt or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file.
With the individual deletion method, the file moving software also does not need to cumulatively keep track of all files finished transferring for the case that a user manually aborts the file transfer. A file manager using the latter (afterwards deletion) method will have to only delete the files from the source directory that have already finished transferring.
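Roughly speaking, the two implementations discussed above can be sketched in Python as follows: a rename when source and destination are on the same file system (only paths change), and a copy followed by deletion of the source when they are not. This mirrors what the standard shutil.move function does; the helper below is an illustration, not a definitive implementation.

import os
import shutil

def move_file(src, dst):
    """Move a single file, preferring a cheap same-file-system rename."""
    try:
        os.rename(src, dst)        # same file system: data is not rewritten
    except OSError:
        shutil.copy2(src, dst)     # different device or partition: rewrite the data...
        os.remove(src)             # ...then delete the source file individually

# Hypothetical usage: move_file("source/report.txt", "target/report.txt")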
In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself. In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can identify the name of the link with the file itself, but this is a false analogue, especially where there exists more than one link to the same file.
Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term "file" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must be typically unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type that means a directory can contain an identical name for more than one type of object such as a directory and a file.
In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names.
Any string of characters may be a well-formed name for a file or a link depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of unicode letters or unicode digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity. The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case.
Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one "master folder" (or "root folder" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have a name). The use of folders makes it easier to organize files in a logical way.
When a computer allows the use of folders, each file and folder has not only a name of its own, but also a path, which identifies the folder or folders in which a file or folder resides. In the path, some sort of special character—such as a slash—is used to separate the file and folder names. For example, in the illustration shown in this article, the path /Payroll/Salaries/Managers uniquely identifies a file called Managers in a folder called Salaries, which in turn is contained in a folder called Payroll. The folder and file names are separated by slashes in this example; the topmost or root folder has no name, and so the path begins with a slash (if the root folder had a name, it would precede this first slash).
Many computer systems use extensions in file names to help identify what they contain, also known as the file type. On Windows computers, extensions consist of a dot (period) at the end of a file name, followed by a few letters to identify the type of file. An extension of .txt identifies a text file; a .doc extension identifies any type of document or documentation, commonly in the Microsoft Word file format; and so on. Even when extensions are used in a computer system, the degree to which the computer system recognizes and heeds them can vary; in some systems, they are required, while in other systems, they are completely ignored if they are presented.
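Using the /Payroll/Salaries/Managers path from the illustration above, a brief pathlib sketch shows how a path encodes the folder hierarchy and how an extension (suffix) is read off a file name; the .txt variant is a hypothetical addition:

from pathlib import PurePosixPath

p = PurePosixPath("/Payroll/Salaries/Managers")
print(p.name)     # 'Managers'          - the file's own name
print(p.parent)   # '/Payroll/Salaries' - the folder that contains it
print(p.parts)    # ('/', 'Payroll', 'Salaries', 'Managers')

q = PurePosixPath("/Payroll/Salaries/Managers.txt")   # hypothetical variant with an extension
print(q.suffix)   # '.txt' - identifies the file as a text file
print(q.stem)     # 'Managers'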
Many modern computer systems provide methods for protecting files against accidental and deliberate damage. Computers that allow for multiple users implement file permissions to control who may or may not modify, delete, or create files and folders. For example, a given user may be granted only permission to read a file or folder, but not to modify or delete it; or a user may be given permission to read and modify files or folders, but not to execute them. Permissions may also be used to allow only certain users to see the contents of a file or folder. Permissions protect against unauthorized tampering or destruction of information in files, and keep private information confidential from unauthorized users.
Another protection mechanism implemented in many computers is a read-only flag. When this flag is turned on for a file (which can be accomplished by a computer program or by a human user), the file can be examined, but it cannot be modified. This flag is useful for critical information that must not be modified or erased, such as special files that are used only by internal parts of the computer system. Some systems also include a hidden flag to make certain files invisible; this flag is used by the computer system to hide essential system files that users should not alter.
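On a Unix-like system, the read-only idea can be sketched by clearing the write permission bits of a (hypothetical) file with os.chmod; the file can then be examined but not modified until the permission is restored:

import os
import stat

with open("config.ini", "w") as f:          # hypothetical file
    f.write("key=value\n")

mode = os.stat("config.ini").st_mode
# Clear every write bit: the file becomes effectively read-only.
os.chmod("config.ini", mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

print(os.access("config.ini", os.R_OK))     # True  - reading is still allowed
print(os.access("config.ini", os.W_OK))     # False - modification is refused

os.chmod("config.ini", mode)                # restore write permission
os.remove("config.ini")                     # so the file can be cleaned up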
Any file that has any useful purpose must have some physical manifestation. That is, a file (an abstract concept) in a real computer system must have a real physical analogue if it is to exist at all.
In physical terms, most computer files are stored on some type of data storage device. For example, most operating systems store files on a hard disk. Hard disks have been the ubiquitous form of non-volatile storage since the early 1960s. Where files contain only temporary information, they may be stored in RAM. Computer files can be also stored on other media in some cases, such as magnetic tapes, compact discs, Digital Versatile Discs, Zip drives, USB flash drives, etc. The use of solid state drives is also beginning to rival the hard disk drive.
In Unix-like operating systems, many files have no associated physical storage device. Examples are /dev/null and most files under directories /dev, /proc and /sys. These are virtual files: they exist as objects within the operating system kernel.
As seen by a running user program, files are usually represented either by a file control block or by a file handle. A file control block (FCB) is an area of memory which is manipulated to establish a filename etc. and then passed to the operating system as a parameter; it was used by older IBM operating systems and early PC operating systems including CP/M and early versions of MS-DOS. A file handle is generally either an opaque data type or an integer; it was introduced in around 1961 by the ALGOL-based Burroughs MCP running on the Burroughs B5000 but is now ubiquitous.
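The "file handle as an integer" representation is still visible through low-level interfaces; in this sketch Python's os.open returns a small integer file descriptor (the file name is hypothetical):

import os

fd = os.open("handle_demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # hypothetical file
print(fd)                           # a small integer handle, e.g. 3

os.write(fd, b"written via a raw file descriptor\n")
os.close(fd)                        # release the handle

fd = os.open("handle_demo.txt", os.O_RDONLY)
print(os.read(fd, 64))              # read back up to 64 bytes through the handle
os.close(fd)
os.remove("handle_demo.txt")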
When a file is said to be corrupted, it is because its contents have been saved to the computer in such a way that they cannot be properly read, either by a human or by software. Depending on the extent of the damage, the original file can sometimes be recovered, or at least partially understood. A file may be created corrupt, or it may be corrupted at a later point through overwriting.
There are many ways by which a file can become corrupted. Most commonly, the issue happens in the process of writing the file to a disk. For example, if an image-editing program unexpectedly crashes while saving an image, that file may be corrupted because the program could not save its entirety. The program itself might warn the user that there was an error, allowing for another attempt at saving the file. Some other reasons for which files become corrupted include an improper shutdown or power loss while a file is being written, failing or physically damaged storage media (such as bad sectors), malware, and errors introduced when a file is copied or transmitted over a network.
Although file corruption usually happens accidentally, it may also be done on purpose as a means of procrastination, for example to fool someone else into thinking an assignment was ready at an earlier date, potentially gaining time to finish said assignment. There are services that provide on-demand file corruption, which essentially fill a given file with random data so that it cannot be opened or read, yet still seems legitimate.
One of the most effective countermeasures for unintentional file corruption is backing up important files. In the event of an important file becoming corrupted, the user can simply replace it with the backed up version.
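Checksums are not mentioned above, but they are a common companion to backups for noticing corruption: if a file's hash no longer matches the hash of its backed-up copy, the working copy has changed or been damaged. The sketch below uses SHA-256 from Python's standard library; the file names in the commented usage are hypothetical.

import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):   # read in 64 KiB pieces
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage:
# if sha256_of("payroll.dat") != sha256_of("backup/payroll.dat"):
#     print("payroll.dat no longer matches its backup")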
When computer files contain information that is extremely important, a back-up process is used to protect against disasters that might destroy the files. Backing up files simply means making copies of the files in a separate location so that they can be restored if something happens to the computer, or if they are deleted accidentally.
There are many ways to back up files. Most computer systems provide utility programs to assist in the back-up process, which can become very time-consuming if there are many files to safeguard. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk, but if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location.
The grandfather-father-son backup method automatically makes three back-ups; the grandfather file is the oldest copy of the file and the son is the current copy.
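A minimal sketch of the grandfather-father-son idea: each run demotes the existing copies one generation and stores the current file as the new son. The file and folder names here are assumptions used only for illustration.

import os
import shutil

def gfs_backup(src, backup_dir):
    """Keep three generations of backups: son (newest), father, grandfather (oldest)."""
    os.makedirs(backup_dir, exist_ok=True)
    son = os.path.join(backup_dir, "son.bak")
    father = os.path.join(backup_dir, "father.bak")
    grandfather = os.path.join(backup_dir, "grandfather.bak")

    if os.path.exists(father):
        shutil.copy2(father, grandfather)   # the father copy becomes the grandfather
    if os.path.exists(son):
        shutil.copy2(son, father)           # the son copy becomes the father
    shutil.copy2(src, son)                  # the current file becomes the new son

# Hypothetical usage: gfs_backup("payroll.dat", "backups")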
The way a computer organizes, names, stores and manipulates files is globally referred to as its file system. Most computers have at least one file system. Some computers allow the use of several different file systems. For instance, on newer MS Windows computers, the older FAT-type file systems of MS-DOS and old versions of Windows are supported, in addition to the NTFS file system that is the normal file system for recent versions of Windows. Each system has its own advantages and disadvantages. Standard FAT allows only eight-character file names (plus a three-character extension) with no spaces, for example, whereas NTFS allows much longer names that can contain spaces. You can call a file "Payroll records" in NTFS, but in FAT you would be restricted to something like payroll.dat (unless you were using VFAT, a FAT extension allowing long file names).
File manager programs are utility programs that allow users to manipulate files directly. They allow you to move, create, delete and rename files and folders, although they do not actually allow you to read the contents of a file or store information in it. Every computer system provides at least one file-manager program for its native file system. For example, File Explorer (formerly Windows Explorer) is commonly used in Microsoft Windows operating systems, and Nautilus is common under several distributions of Linux. | [
{
"paragraph_id": 0,
"text": "In computing, a computer file is a resource for recording data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Different types of computer files are designed for different purposes. A file may be designed to store an image, a written message, a video, a program, or any wide variety of other kinds of data. Certain files can store multiple data types at once.",
"title": ""
},
{
"paragraph_id": 2,
"text": "By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Files are typically organized in a file system, which tracks file locations on the disk and enables user access.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The word \"file\" derives from the Latin filum (\"a thread, string\").",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "\"File\" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, \"The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie. He used it for building a table from successive differences, and for adding large numbers of harmonic terms\". \"Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards.\"",
"title": "Etymology"
},
{
"paragraph_id": 6,
"text": "In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new \"memory\" vacuum tube it had developed, RCA stated: \"the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones – speeds intelligent solutions through mazes of mathematics.\"",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "In 1952, \"file\" denoted, among other things, information stored on punched cards.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "In early use, the underlying hardware, rather than the contents stored on it, was denominated a \"file\". For example, the IBM 350 disk drives were denominated \"disk files\". The introduction, c. 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a \"file system\" that managed several virtual \"files\" on one storage device is the origin of the contemporary denotation of the word. Although the contemporary \"register file\" demonstrates the early concept of files, its use has greatly decreased.",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content since a file is solely a container for data.",
"title": "File contents"
},
{
"paragraph_id": 10,
"text": "On some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file (.txt in Windows) are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself.",
"title": "File contents"
},
{
"paragraph_id": 11,
"text": "Some file systems can store arbitrary (not interpreted by the file system) file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are more susceptible to loss of metadata than container and archive file formats.",
"title": "File contents"
},
{
"paragraph_id": 12,
"text": "At any instant in time, a file has a specific size, normally expressed as a number of bytes, that indicates how much storage is occupied by the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count (e.g., CP/M used a special control character, Ctrl-Z, to signal the end of text files).",
"title": "File contents"
},
{
"paragraph_id": 13,
"text": "The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents (the results of aborted disk operations). For example, the file to which the link /bin/ls points in a typical Unix-like system probably has a defined size that seldom changes. Compare this with /dev/null which is also a file, but as a character special file, its size is not meaningful.",
"title": "File contents"
},
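The size behaviour described above is easy to observe from a program; the following minimal Python sketch (assuming a Unix-like system for the /dev/null part, and a hypothetical file name) shows a newly created zero-byte file and a character special file whose reported size is not meaningful.

```python
import os

# A newly created file that has had no data written to it is zero bytes long.
open("empty.dat", "w").close()
print(os.path.getsize("empty.dat"))   # 0

# /dev/null is a character special file; the 0 it reports is not a real size.
print(os.path.getsize("/dev/null"))   # 0

os.remove("empty.dat")                # remove the hypothetical example file
```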
{
"paragraph_id": 14,
"text": "Information in a computer file can consist of smaller packets of information (often called \"records\" or \"lines\") that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a blob) or it may contain an executable.",
"title": "File contents"
},
{
"paragraph_id": 15,
"text": "The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names.",
"title": "File contents"
},
{
"paragraph_id": 16,
"text": "In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. Although the content of the document file is arranged in a format that the word-processing program understands, the user is able to choose the name and location of the file and provide the bulk of the information (such as words and text) that will be stored in the file.",
"title": "File contents"
},
{
"paragraph_id": 17,
"text": "Many applications pack all their data files into a single file called an archive file, using internal markers to discern the different types of information contained within. The benefits of the archive file are to lower the number of files for easier transfer, to reduce storage usage, or just to organize outdated files. The archive file must often be unpacked before next using.",
"title": "File contents"
},
{
"paragraph_id": 18,
"text": "The most basic operations that programs can perform on a file are:",
"title": "File contents"
},
{
"paragraph_id": 19,
"text": "Files on a computer can be created, moved, modified, grown, shrunk (truncated), and deleted. In most cases, computer programs that are executed on the computer handle these operations, but the user of a computer can also manipulate files if necessary. For instance, Microsoft Word files are normally created and modified by the Microsoft Word program in response to user commands, but the user can also move, rename, or delete these files directly by using a file manager program such as Windows Explorer (on Windows computers) or by command lines (CLI).",
"title": "File contents"
},
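As a rough illustration of these operations, the sketch below uses Python's standard library to create, modify, read, move and finally delete a file; the file names are hypothetical.

```python
from pathlib import Path

# Create a new file and write some text into it.
report = Path("report.txt")
report.write_text("quarterly figures\n")

# Modify the file by appending another line.
with report.open("a") as f:
    f.write("updated totals\n")

# Read the contents back.
print(report.read_text())

# Move (rename) the file, then delete it.
archived = report.rename("report-archived.txt")
archived.unlink()
```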
{
"paragraph_id": 20,
"text": "In Unix-like systems, user space programs do not operate directly, at a low level, on a file. Only the kernel deals with files, and it handles all user-space interaction with files in a manner that is transparent to the user-space programs. The operating system provides a level of abstraction, which means that interaction with a file from user-space is simply through its filename (instead of its inode). For example, rm filename will not delete the file itself, but only a link to the file. There can be many links to a file, but when they are all removed, the kernel considers that file's memory space free to be reallocated. This free space is commonly considered a security risk (due to the existence of file recovery software). Any secure-deletion program uses kernel-space (system) functions to wipe the file's data.",
"title": "File contents"
},
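The separation between names (links) and the file itself can be observed from user space. This is a minimal sketch, assuming a Unix-like system and hypothetical file names, in which a second hard link keeps the data reachable after the first name is removed.

```python
import os

# Create a file and give it a second name (hard link).
with open("data.bin", "wb") as f:
    f.write(b"\x00" * 16)
os.link("data.bin", "alias.bin")

# Both names point to the same inode; the link count is now 2.
info = os.stat("data.bin")
print(info.st_ino, info.st_nlink)

# Removing one name does not destroy the contents; the kernel frees the
# space only once the last link (and last open handle) is gone.
os.remove("data.bin")
print(os.path.getsize("alias.bin"))   # still 16 bytes
os.remove("alias.bin")
```

Secure-deletion tools exist precisely because removing the last link only marks the space as reusable rather than erasing the data.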
{
"paragraph_id": 21,
"text": "File moves within a file system complete almost immediately because the data content does not need to be rewritten. Only the paths need to be changed.",
"title": "File contents"
},
{
"paragraph_id": 22,
"text": "There are two distinct implementations of file moves.",
"title": "File contents"
},
{
"paragraph_id": 23,
"text": "When moving files between devices or partitions, some file managing software deletes each selected file from the source directory individually after being transferred, while other software deletes all files at once only after every file has been transferred.",
"title": "File contents"
},
{
"paragraph_id": 24,
"text": "With the mv command for instance, the former method is used when selecting files individually, possibly with the use of wildcards (example: mv -n sourcePath/* targetPath, while the latter method is used when selecting entire directories (example: mv -n sourcePath targetPath). Microsoft Windows Explorer uses the former method for mass storage file moves, but the latter method using Media Transfer Protocol, as described in Media Transfer Protocol § File move behavior.",
"title": "File contents"
},
{
"paragraph_id": 25,
"text": "The former method (individual deletion from source) has the benefit that space is released from the source device or partition imminently after the transfer has begun, meaning after the first file is finished. With the latter method, space is only freed after the transfer of the entire selection has finished.",
"title": "File contents"
},
{
"paragraph_id": 26,
"text": "If an incomplete file transfer with the latter method is aborted unexpectedly, perhaps due to an unexpected power-off, system halt or disconnection of a device, no space will have been freed up on the source device or partition. The user would need to merge the remaining files from the source, including the incompletely written (truncated) last file.",
"title": "File contents"
},
{
"paragraph_id": 27,
"text": "With the individual deletion method, the file moving software also does not need to cumulatively keep track of all files finished transferring for the case that a user manually aborts the file transfer. A file manager using the latter (afterwards deletion) method will have to only delete the files from the source directory that have already finished transferring.",
"title": "File contents"
},
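The two deletion strategies discussed in the preceding paragraphs can be sketched as follows. This is an illustrative Python outline, not the actual behaviour of any particular file manager, and it assumes the hypothetical source directory contains only regular files.

```python
import shutil
from pathlib import Path

def move_individually(src_dir: str, dst_dir: str) -> None:
    # Former method: delete each source file as soon as its copy completes,
    # so space on the source device is freed incrementally.
    for src in Path(src_dir).iterdir():
        shutil.copy2(src, Path(dst_dir) / src.name)
        src.unlink()

def move_then_delete(src_dir: str, dst_dir: str) -> None:
    # Latter method: remember what was copied and delete the sources only
    # after every file has been transferred; an aborted run frees no space,
    # but the record allows deleting exactly the files already transferred.
    copied = []
    for src in Path(src_dir).iterdir():
        shutil.copy2(src, Path(dst_dir) / src.name)
        copied.append(src)
    for src in copied:
        src.unlink()
```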
{
"paragraph_id": 28,
"text": "In modern computer systems, files are typically accessed using names (filenames). In some operating systems, the name is associated with the file itself. In others, the file is anonymous, and is pointed to by links that have names. In the latter case, a user can identify the name of the link with the file itself, but this is a false analogue, especially where there exists more than one link to the same file.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 29,
"text": "Files (or links to files) can be located in directories. However, more generally, a directory can contain either a list of files or a list of links to files. Within this definition, it is of paramount importance that the term \"file\" includes directories. This permits the existence of directory hierarchies, i.e., directories containing sub-directories. A name that refers to a file within a directory must be typically unique. In other words, there must be no identical names within a directory. However, in some operating systems, a name may include a specification of type that means a directory can contain an identical name for more than one type of object such as a directory and a file.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 30,
"text": "In environments in which a file is named, a file's name and the path to the file's directory must uniquely identify it among all other files in the computer system—no two files can have the same name and path. Where a file is anonymous, named references to it will exist within a namespace. In most cases, any name within the namespace will refer to exactly zero or one file. However, any file may be represented within any namespace by zero, one or more names.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 31,
"text": "Any string of characters may be a well-formed name for a file or a link depending upon the context of application. Whether or not a name is well-formed depends on the type of computer system being used. Early computers permitted only a few letters or digits in the name of a file, but modern computers allow long names (some up to 255 characters) containing almost any combination of unicode letters or unicode digits, making it easier to understand the purpose of a file at a glance. Some computer systems allow file names to contain spaces; others do not. Case-sensitivity of file names is determined by the file system. Unix file systems are usually case sensitive and allow user-level applications to create files whose names differ only in the case of characters. Microsoft Windows supports multiple file systems, each with different policies regarding case-sensitivity. The common FAT file system can have multiple files whose names differ only in case if the user uses a disk editor to edit the file names in the directory entries. User applications, however, will usually not allow the user to create multiple files with the same name but differing in case.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 32,
"text": "Most computers organize files into hierarchies using folders, directories, or catalogs. The concept is the same irrespective of the terminology used. Each folder can contain an arbitrary number of files, and it can also contain other folders. These other folders are referred to as subfolders. Subfolders can contain still more files and folders and so on, thus building a tree-like structure in which one \"master folder\" (or \"root folder\" — the name varies from one operating system to another) can contain any number of levels of other folders and files. Folders can be named just as files can (except for the root folder, which often does not have a name). The use of folders makes it easier to organize files in a logical way.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 33,
"text": "When a computer allows the use of folders, each file and folder has not only a name of its own, but also a path, which identifies the folder or folders in which a file or folder resides. In the path, some sort of special character—such as a slash—is used to separate the file and folder names. For example, in the illustration shown in this article, the path /Payroll/Salaries/Managers uniquely identifies a file called Managers in a folder called Salaries, which in turn is contained in a folder called Payroll. The folder and file names are separated by slashes in this example; the topmost or root folder has no name, and so the path begins with a slash (if the root folder had a name, it would precede this first slash).",
"title": "Identifying and organizing"
},
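The same hypothetical /Payroll/Salaries/Managers path can be taken apart programmatically; a minimal Python sketch:

```python
from pathlib import PurePosixPath

path = PurePosixPath("/Payroll/Salaries/Managers")

print(path.name)    # Managers            (the file itself)
print(path.parent)  # /Payroll/Salaries   (the folder that contains it)
print(path.parts)   # ('/', 'Payroll', 'Salaries', 'Managers')
```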
{
"paragraph_id": 34,
"text": "Many computer systems use extensions in file names to help identify what they contain, also known as the file type. On Windows computers, extensions consist of a dot (period) at the end of a file name, followed by a few letters to identify the type of file. An extension of .txt identifies a text file; a .doc extension identifies any type of document or documentation, commonly in the Microsoft Word file format; and so on. Even when extensions are used in a computer system, the degree to which the computer system recognizes and heeds them can vary; in some systems, they are required, while in other systems, they are completely ignored if they are presented.",
"title": "Identifying and organizing"
},
{
"paragraph_id": 35,
"text": "Many modern computer systems provide methods for protecting files against accidental and deliberate damage. Computers that allow for multiple users implement file permissions to control who may or may not modify, delete, or create files and folders. For example, a given user may be granted only permission to read a file or folder, but not to modify or delete it; or a user may be given permission to read and modify files or folders, but not to execute them. Permissions may also be used to allow only certain users to see the contents of a file or folder. Permissions protect against unauthorized tampering or destruction of information in files, and keep private information confidential from unauthorized users.",
"title": "Protection"
},
{
"paragraph_id": 36,
"text": "Another protection mechanism implemented in many computers is a read-only flag. When this flag is turned on for a file (which can be accomplished by a computer program or by a human user), the file can be examined, but it cannot be modified. This flag is useful for critical information that must not be modified or erased, such as special files that are used only by internal parts of the computer system. Some systems also include a hidden flag to make certain files invisible; this flag is used by the computer system to hide essential system files that users should not alter.",
"title": "Protection"
},
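A hedged sketch of the read-only protection described above, assuming a POSIX-style permission model and a hypothetical file name: clearing the write bits leaves the file readable but not writable until the original permissions are restored.

```python
import os
import stat

path = "settings.conf"
with open(path, "w") as f:
    f.write("color=blue\n")

# Turn the read-only protection on by removing every write permission bit.
mode = os.stat(path).st_mode
os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

print(os.access(path, os.R_OK))   # True: the file can still be examined
print(os.access(path, os.W_OK))   # False: modification is refused

# Restore the original permissions and remove the example file.
os.chmod(path, mode)
os.remove(path)
```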
{
"paragraph_id": 37,
"text": "Any file that has any useful purpose must have some physical manifestation. That is, a file (an abstract concept) in a real computer system must have a real physical analogue if it is to exist at all.",
"title": "Storage"
},
{
"paragraph_id": 38,
"text": "In physical terms, most computer files are stored on some type of data storage device. For example, most operating systems store files on a hard disk. Hard disks have been the ubiquitous form of non-volatile storage since the early 1960s. Where files contain only temporary information, they may be stored in RAM. Computer files can be also stored on other media in some cases, such as magnetic tapes, compact discs, Digital Versatile Discs, Zip drives, USB flash drives, etc. The use of solid state drives is also beginning to rival the hard disk drive.",
"title": "Storage"
},
{
"paragraph_id": 39,
"text": "In Unix-like operating systems, many files have no associated physical storage device. Examples are /dev/null and most files under directories /dev, /proc and /sys. These are virtual files: they exist as objects within the operating system kernel.",
"title": "Storage"
},
{
"paragraph_id": 40,
"text": "As seen by a running user program, files are usually represented either by a file control block or by a file handle. A file control block (FCB) is an area of memory which is manipulated to establish a filename etc. and then passed to the operating system as a parameter; it was used by older IBM operating systems and early PC operating systems including CP/M and early versions of MS-DOS. A file handle is generally either an opaque data type or an integer; it was introduced in around 1961 by the ALGOL-based Burroughs MCP running on the Burroughs B5000 but is now ubiquitous.",
"title": "Storage"
},
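In most modern APIs the handle idea is directly visible. In this small Python sketch the object returned by open() acts as an opaque handle that wraps the integer file descriptor handed out by the operating system; the file name is hypothetical.

```python
import os

# open() returns a handle; the program works through it rather than
# touching the on-disk data structures directly.
handle = open("notes.txt", "w")
print(handle.fileno())            # the underlying integer descriptor

handle.write("remember the milk\n")
handle.close()                    # hands the descriptor back to the OS

os.remove("notes.txt")            # remove the example file again
```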
{
"paragraph_id": 41,
"text": "When a file is said to be corrupted, it is because its contents have been saved to the computer in such a way that they cannot be properly read, either by a human or by software. Depending on the extent of the damage, the original file can sometimes be recovered, or at least partially understood. A file may be created corrupt, or it may be corrupted at a later point through overwriting.",
"title": "File corruption"
},
{
"paragraph_id": 42,
"text": "There are many ways by which a file can become corrupted. Most commonly, the issue happens in the process of writing the file to a disk. For example, if an image-editing program unexpectedly crashes while saving an image, that file may be corrupted because the program could not save its entirety. The program itself might warn the user that there was an error, allowing for another attempt at saving the file. Some other examples of reasons for which files become corrupted include:",
"title": "File corruption"
},
{
"paragraph_id": 43,
"text": "Although file corruption usually happens accidentally, it may also be done on purpose as a mean of procrastination, as to fool someone else into thinking an assignment was ready at an earlier date, potentially gaining time to finish said assignment. There are services that provide on demand file corruption, which essentially fill a given file with random data so that it cannot be opened or read, yet still seems legitimate.",
"title": "File corruption"
},
{
"paragraph_id": 44,
"text": "One of the most effective countermeasures for unintentional file corruption is backing up important files. In the event of an important file becoming corrupted, the user can simply replace it with the backed up version.",
"title": "File corruption"
},
{
"paragraph_id": 45,
"text": "When computer files contain information that is extremely important, a back-up process is used to protect against disasters that might destroy the files. Backing up files simply means making copies of the files in a separate location so that they can be restored if something happens to the computer, or if they are deleted accidentally.",
"title": "Backup"
},
{
"paragraph_id": 46,
"text": "There are many ways to back up files. Most computer systems provide utility programs to assist in the back-up process, which can become very time-consuming if there are many files to safeguard. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk, but if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location.",
"title": "Backup"
},
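A minimal sketch of such a backup, assuming a hypothetical list of important files and a hypothetical mount point for the separate backup medium:

```python
import shutil
from pathlib import Path

important = [Path("ledger.xlsx"), Path("thesis.docx")]  # hypothetical files
backup_root = Path("/mnt/backup_drive")                 # hypothetical location

backup_root.mkdir(parents=True, exist_ok=True)
for original in important:
    if original.exists():
        # copy2 copies the contents and preserves the timestamps.
        shutil.copy2(original, backup_root / original.name)
```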
{
"paragraph_id": 47,
"text": "The grandfather-father-son backup method automatically makes three back-ups; the grandfather file is the oldest copy of the file and the son is the current copy.",
"title": "Backup"
},
{
"paragraph_id": 48,
"text": "The way a computer organizes, names, stores and manipulates files is globally referred to as its file system. Most computers have at least one file system. Some computers allow the use of several different file systems. For instance, on newer MS Windows computers, the older FAT-type file systems of MS-DOS and old versions of Windows are supported, in addition to the NTFS file system that is the normal file system for recent versions of Windows. Each system has its own advantages and disadvantages. Standard FAT allows only eight-character file names (plus a three-character extension) with no spaces, for example, whereas NTFS allows much longer names that can contain spaces. You can call a file \"Payroll records\" in NTFS, but in FAT you would be restricted to something like payroll.dat (unless you were using VFAT, a FAT extension allowing long file names).",
"title": "File systems and file managers"
},
{
"paragraph_id": 49,
"text": "File manager programs are utility programs that allow users to manipulate files directly. They allow you to move, create, delete and rename files and folders, although they do not actually allow you to read the contents of a file or store information in it. Every computer system provides at least one file-manager program for its native file system. For example, File Explorer (formerly Windows Explorer) is commonly used in Microsoft Windows operating systems, and Nautilus is common under several distributions of Linux.",
"title": "File systems and file managers"
}
] | In computing, a computer file is a resource for recording data on a computer storage device, primarily identified by its filename. Just as words can be written on paper, so can data be written to a computer file. Files can be shared with and transferred between computers and mobile devices via removable media, networks, or the Internet. Different types of computer files are designed for different purposes. A file may be designed to store an image, a written message, a video, a program, or any wide variety of other kinds of data. Certain files can store multiple data types at once. By using computer programs, a person can open, read, change, save, and close a computer file. Computer files may be reopened, modified, and copied an arbitrary number of times. Files are typically organized in a file system, which tracks file locations on the disk and enables user access. | 2001-11-11T16:11:35Z | 2023-12-13T21:22:59Z | [
"Template:Short description",
"Template:Misleading",
"Template:Main article",
"Template:Webarchive",
"Template:Section link",
"Template:Which",
"Template:Main",
"Template:Commons category-inline",
"Template:Computer files",
"Template:About",
"Template:Multiple image",
"Template:Cite book",
"Template:Cite journal",
"Template:Citation",
"Template:Cite news",
"Template:Curlie",
"Template:Refimprove",
"Template:Circa",
"Template:Mono",
"Template:Further",
"Template:Reflist",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Computer_file |
7,079 | CID | CID may refer to: | [
{
"paragraph_id": 0,
"text": "CID may refer to:",
"title": ""
}
] | CID may refer to: | 2023-06-02T13:36:19Z | [
"Template:Wiktionary",
"Template:Tocright",
"Template:Lang",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CID |
|
7,080 | Christian Doppler | Christian Andreas Doppler (/ˈdɒplər/ ; 29 November 1803 – 17 March 1853) was an Austrian mathematician and physicist. He formulated the principle – now known as the Doppler effect – that the observed frequency of a wave depends on the relative speed of the source and the observer.
Doppler was born in Salzburg (today Austria) in 1803. Doppler was the second son born to Johann Evangelist Doppler and Theresia Seeleuthner (Doppler). Doppler's father, Johann Doppler, was a third-generation stone mason in Salzburg. As a young boy, Doppler showed promise for his family's trade. However, due to his weak health, Doppler's father encouraged him instead to pursue a career in business. Doppler started elementary education at the age of 13. After completion, he moved on to secondary education at a school in Linz. Doppler's proficiency in mathematics was discovered by Simon Stampfer, a mathematician in Salzburg. Upon the recommendation of Simon Stampfer, Doppler took a break from high school to attend the Polytechnic Institute in Vienna in 1822. Doppler returned to Salzburg in 1825 to finish his secondary education. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the University of Vienna and Imperial–Royal Polytechnic Institute (now TU Wien). In 1829, he was chosen for an assistant position to Professor Adam von Burg at the Polytechnic Institute of Vienna, where he continued his studies. In 1835, he decided to immigrate to the United States to pursue a position in academia. Before departing for the United States, Doppler was offered a teaching position at a state-operated high school in Prague, which convinced him to stay in Europe. Shortly after, in 1837 he was appointed as an associate professor of math and geometry at the Prague Polytechnic Institute (now Czech Technical University in Prague). He received a full professorship position in 1841.
In 1836, Doppler married Mathilde Sturm, the daughter of goldsmith Franz Sturm. Doppler and Mathilde had five children together. Their first child was Mathilde Doppler who was born in 1837. Doppler's second child, Ludwig Doppler was born in 1838. Two years later, in 1840 Adolf Doppler was born. Doppler's fourth child, Bertha Doppler was born in 1843. Their last child Hermann was born in 1845.
In 1842, at the age of 38, Doppler gave a lecture to the Royal Bohemian Society of Sciences and subsequently published Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels ("On the coloured light of the binary stars and some other stars of the heavens"). In this work, Doppler postulated his principle (later named the Doppler effect) that the observed frequency of a wave depends on the relative speed of the source and the observer, and he later tried to use this concept to explain the visible colours of binary stars (this hypothesis was later proven wrong). Doppler also incorrectly believed that if a star were to exceed 136,000 kilometers per second in radial velocity, then it would not be visible to the human eye.
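The principle can be stated compactly. For waves carried by a medium, the standard classical relation (given here only as an illustration, with $v$ the wave speed in the medium, $v_o$ the observer's speed toward the source and $v_s$ the source's speed toward the observer) is

$$ f_{\text{observed}} = f_{\text{emitted}}\,\frac{v + v_o}{v - v_s}, $$

while for light from stars, Doppler's own application, the modern treatment uses the relativistic form of the formula.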
Doppler continued working as a professor at the Prague Polytechnic, publishing over 50 articles on mathematics, physics and astronomy, but in 1847 he left Prague for the professorship of mathematics, physics, and mechanics at the Academy of Mines and Forests (its successor is the University of Miskolc) in Selmecbánya (then Kingdom of Hungary, now Banská Štiavnica Slovakia).
Doppler's research was interrupted by the Hungarian Revolution of 1848. In 1849, he fled to Vienna and in 1850 was appointed head of the Institute for Experimental Physics at the University of Vienna. While there, Doppler, along with Franz Unger, influenced the development of young Gregor Mendel, the founding father of genetics, who was a student at the University of Vienna from 1851 to 1853.
Doppler died on 17 March 1853 at age 49 from a pulmonary disease in Venice (at that time part of the Austrian Empire). His tomb is in the San Michele cemetery on the Venetian island of San Michele.
Some confusion exists about Doppler's full name. Doppler referred to himself as Christian Doppler. The records of his birth and baptism stated Christian Andreas Doppler. Doppler's middle name is shared by his great-great-grandfather Andreas Doppler. Forty years after Doppler's death the misnomer Johann Christian Doppler was introduced by the astronomer Julius Scheiner. Scheiner's mistake has since been copied by many. | [
{
"paragraph_id": 0,
"text": "Christian Andreas Doppler (/ˈdɒplər/ ; 29 November 1803 – 17 March 1853) was an Austrian mathematician and physicist. He formulated the principle – now known as the Doppler effect – that the observed frequency of a wave depends on the relative speed of the source and the observer.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Doppler was born in Salzburg (today Austria) in 1803. Doppler was the second son born to Johann Evangelist Doppler and Theresia Seeleuthner (Doppler). Doppler's father, Johann Doppler, was a third-generation stone mason in Salzburg. As a young boy, Doppler showed promise for his family's trade. However, due to his weak health, Doppler's father encouraged him instead to pursue a career in business. Doppler started elementary education at the age of 13. After completion, he moved on to secondary education at a school in Linz. Doppler's proficiency in mathematics was discovered by Sion Stampfer, a mathematician in Salzburg. Upon the recommendation of Sion Stampfer, Doppler took a break from high school to attend the Polytechnic Institute in Vienna in 1822. Doppler returned to Salzburg in 1825 to finish his secondary education. After completing high school, Doppler studied philosophy in Salzburg and mathematics and physics at the University of Vienna and Imperial–Royal Polytechnic Institute (now TU Wien). In 1829, he was chosen for an assistant position to Professor Adam Von Burg at the Polytechnic Institute of Vienna, where he continued his studies. In 1835, he decided to immigrate to the United States to pursue a position in academia. Before departing for the United States, Doppler was offered a teaching position at a state-operated high school in Prague, which convinced him to stay in Europe. Shortly after, in 1837 he was appointed as an associate professor of math and geometry at the Prague Polytechnic Institute (now Czech Technical University in Prague). He received a full professorship position in 1841.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "In 1836, Doppler married Mathilde Sturm, the daughter of goldsmith Franz Sturm. Doppler and Mathilde had five children together. Their first child was Mathilde Doppler who was born in 1837. Doppler's second child, Ludwig Doppler was born in 1838. Two years later, in 1840 Adolf Doppler was born. Doppler's fourth child, Bertha Doppler was born in 1843. Their last child Hermann was born in 1845.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "In 1842, at the age of 38, Doppler gave a lecture to the Royal Bohemian Society of Sciences and subsequently published Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels (\"On the coloured light of the binary stars and some other stars of the heavens\"). In this work, Doppler postulated his principle (later named the Doppler effect) that the observed frequency of a wave depends on the relative speed of the source and the observer, and he later tried to use this concept to explain the visible colours of binary stars (this hypothesis was later proven wrong). Doppler also incorrectly believed that if a star were to exceed 136,000 kilometers per second in radial velocity, then it would not be visible to the human eye.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "Doppler continued working as a professor at the Prague Polytechnic, publishing over 50 articles on mathematics, physics and astronomy, but in 1847 he left Prague for the professorship of mathematics, physics, and mechanics at the Academy of Mines and Forests (its successor is the University of Miskolc) in Selmecbánya (then Kingdom of Hungary, now Banská Štiavnica Slovakia).",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "Doppler's research was interrupted by the Hungarian Revolution of 1848. In 1849, he fled to Vienna and in 1850 was appointed head of the Institute for Experimental Physics at the University of Vienna. While there, Doppler, along with Franz Unger, influenced the development of young Gregor Mendel, the founding father of genetics, who was a student at the University of Vienna from 1851 to 1853.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "Doppler died on 17 March 1853 at age 49 from a pulmonary disease in Venice (at that time part of the Austrian Empire). His tomb is in the San Michele cemetery on the Venetian island of San Michele.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Some confusion exists about Doppler's full name. Doppler referred to himself as Christian Doppler. The records of his birth and baptism stated Christian Andreas Doppler. Doppler's middle name is shared by his great-great-grandfather Andreas Doppler. Forty years after Doppler's death the misnomer Johann Christian Doppler was introduced by the astronomer Julius Scheiner. Scheiner's mistake has since been copied by many.",
"title": "Full name"
}
] | Christian Andreas Doppler was an Austrian mathematician and physicist. He formulated the principle – now known as the Doppler effect – that the observed frequency of a wave depends on the relative speed of the source and the observer. | 2001-11-11T19:08:50Z | 2023-11-29T17:43:37Z | [
"Template:MacTutor Biography",
"Template:Cite journal",
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:Cite book",
"Template:Authority control",
"Template:Short description",
"Template:Infobox scientist",
"Template:Cite web",
"Template:Commons category",
"Template:Wikiquote",
"Template:ISBN",
"Template:Reflist",
"Template:Cite encyclopedia"
] | https://en.wikipedia.org/wiki/Christian_Doppler |
7,081 | Clerihew | A clerihew (/ˈklɛrɪhjuː/) is a whimsical, four-line biographical poem of a type invented by Edmund Clerihew Bentley. The first line is the name of the poem's subject, usually a famous person, and the remainder puts the subject in an absurd light or reveals something unknown or spurious about the subject. The rhyme scheme is AABB, and the rhymes are often forced. The line length and metre are irregular. Bentley invented the clerihew in school and then popularized it in books. One of his best known is this (1905):
Sir Christopher Wren Said, "I am going to dine with some men. If anyone calls Say I am designing St Paul's."
A clerihew has the following properties:
Clerihews are not satirical or abusive, but they target famous individuals and reposition them in an absurd, anachronistic or commonplace setting, often giving them an over-simplified and slightly garbled description.
The form was invented by and is named after Edmund Clerihew Bentley. When he was a 16-year-old pupil at St Paul's School in London, the lines of his first clerihew, about Humphry Davy, came into his head during a science class. Together with his schoolfriends, he filled a notebook with examples. The first known use of the word in print dates from 1928. Bentley published three volumes of his own clerihews: Biography for Beginners (1905), published as "edited by E. Clerihew"; More Biography (1929); and Baseless Biography (1939), a compilation of clerihews originally published in Punch illustrated by the author's son Nicolas Bentley.
G. K. Chesterton, a friend of Bentley, was also a practitioner of the clerihew and one of the sources of its popularity. Chesterton provided verses and illustrations for the original schoolboy notebook and illustrated Biography for Beginners. Other serious authors also produced clerihews, including W. H. Auden, and it remains a popular humorous form among other writers and the general public. Among contemporary writers, the satirist Craig Brown has made considerable use of the clerihew in his columns for The Daily Telegraph.
Bentley's first clerihew, published in 1905, was written about Sir Humphry Davy:
Sir Humphry Davy Abominated gravy. He lived in the odium Of having discovered sodium.
The original poem had the second line "Was not fond of gravy"; but the published version has "Abominated gravy".
Other clerihews by Bentley include:
George the Third Ought never to have occurred. One can only wonder At so grotesque a blunder.
and
John Stuart Mill, By a mighty effort of will, Overcame his natural bonhomie And wrote Principles of Political Economy.
W. H. Auden's Academic Graffiti (1971) includes:
Sir Henry Rider Haggard Was completely staggered When his bride-to-be Announced, "I am She!"
Satirical magazine Private Eye noted Auden's work and responded:
W. H. Auden Suffers from acute boredom But for his readers he's got some merry news He's written a collection of rather bad clerihews.
A second stanza aimed a jibe at Auden's publisher, Faber and Faber.
Alan Turing, one of the founders of computing, was the subject of a clerihew written by the pupils of his alma mater, Sherborne School in England:
Turing Must have been alluring To get made a don So early on.
A clerihew appreciated by chemists is cited in Dark Sun by Richard Rhodes, and regards the inventor of the thermos bottle (or Dewar flask):
Sir James Dewar Is a better man than you are None of you asses Can liquefy gases.
Dark Sun also features a clerihew about the German-British physicist and Soviet nuclear spy Klaus Fuchs:
Fuchs Looks Like an ascetic Theoretic
In 1983, Games magazine ran a contest titled "Do You Clerihew?" The winning entry was:
Did Descartes Depart With the thought "Therefore I'm not"?
The clerihew form has also occasionally been used for non-biographical verses. Bentley opened his 1905 Biography for Beginners with an example, entitled "Introductory Remarks", on the theme of biography itself:
The Art of Biography Is different from Geography. Geography is about Maps, But Biography is about Chaps.
The third edition of the same work, published in 1925, includes a "Preface to the New Edition" in 11 stanzas, each in clerihew form. One stanza runs:
On biographic style (Formerly so vile) The book has had an effect Greater than I could reasonably expect. | [
{
"paragraph_id": 0,
"text": "A clerihew (/ˈklɛrɪhjuː/) is a whimsical, four-line biographical poem of a type invented by Edmund Clerihew Bentley. The first line is the name of the poem's subject, usually a famous person, and the remainder puts the subject in an absurd light or reveals something unknown or spurious about the subject. The rhyme scheme is AABB, and the rhymes are often forced. The line length and metre are irregular. Bentley invented the clerihew in school and then popularized it in books. One of his best known is this (1905):",
"title": ""
},
{
"paragraph_id": 1,
"text": "Sir Christopher Wren Said, \"I am going to dine with some men. If anyone calls Say I am designing St Paul's.\"",
"title": ""
},
{
"paragraph_id": 2,
"text": "A clerihew has the following properties:",
"title": "Form"
},
{
"paragraph_id": 3,
"text": "Clerihews are not satirical or abusive, but they target famous individuals and reposition them in an absurd, anachronistic or commonplace setting, often giving them an over-simplified and slightly garbled description.",
"title": "Form"
},
{
"paragraph_id": 4,
"text": "The form was invented by and is named after Edmund Clerihew Bentley. When he was a 16-year-old pupil at St Paul's School in London, the lines of his first clerihew, about Humphry Davy, came into his head during a science class. Together with his schoolfriends, he filled a notebook with examples. The first known use of the word in print dates from 1928. Bentley published three volumes of his own clerihews: Biography for Beginners (1905), published as \"edited by E. Clerihew\"; More Biography (1929); and Baseless Biography (1939), a compilation of clerihews originally published in Punch illustrated by the author's son Nicolas Bentley.",
"title": "Practitioners"
},
{
"paragraph_id": 5,
"text": "G. K. Chesterton, a friend of Bentley, was also a practitioner of the clerihew and one of the sources of its popularity. Chesterton provided verses and illustrations for the original schoolboy notebook and illustrated Biography for Beginners. Other serious authors also produced clerihews, including W. H. Auden, and it remains a popular humorous form among other writers and the general public. Among contemporary writers, the satirist Craig Brown has made considerable use of the clerihew in his columns for The Daily Telegraph.",
"title": "Practitioners"
},
{
"paragraph_id": 6,
"text": "Bentley's first clerihew, published in 1905, was written about Sir Humphry Davy:",
"title": "Examples"
},
{
"paragraph_id": 7,
"text": "Sir Humphry Davy Abominated gravy. He lived in the odium Of having discovered sodium.",
"title": "Examples"
},
{
"paragraph_id": 8,
"text": "The original poem had the second line \"Was not fond of gravy\"; but the published version has \"Abominated gravy\".",
"title": "Examples"
},
{
"paragraph_id": 9,
"text": "Other clerihews by Bentley include:",
"title": "Examples"
},
{
"paragraph_id": 10,
"text": "George the Third Ought never to have occurred. One can only wonder At so grotesque a blunder.",
"title": "Examples"
},
{
"paragraph_id": 11,
"text": "and",
"title": "Examples"
},
{
"paragraph_id": 12,
"text": "John Stuart Mill, By a mighty effort of will, Overcame his natural bonhomie And wrote Principles of Political Economy.",
"title": "Examples"
},
{
"paragraph_id": 13,
"text": "W. H. Auden's Academic Graffiti (1971) includes:",
"title": "Examples"
},
{
"paragraph_id": 14,
"text": "Sir Henry Rider Haggard Was completely staggered When his bride-to-be Announced, \"I am She!\"",
"title": "Examples"
},
{
"paragraph_id": 15,
"text": "Satirical magazine Private Eye noted Auden's work and responded:",
"title": "Examples"
},
{
"paragraph_id": 16,
"text": "W. H. Auden Suffers from acute boredom But for his readers he's got some merry news He's written a collection of rather bad clerihews.",
"title": "Examples"
},
{
"paragraph_id": 17,
"text": "A second stanza aimed a jibe at Auden's publisher, Faber and Faber.",
"title": "Examples"
},
{
"paragraph_id": 18,
"text": "Alan Turing, one of the founders of computing, was the subject of a clerihew written by the pupils of his alma mater, Sherborne School in England:",
"title": "Examples"
},
{
"paragraph_id": 19,
"text": "Turing Must have been alluring To get made a don So early on.",
"title": "Examples"
},
{
"paragraph_id": 20,
"text": "A clerihew appreciated by chemists is cited in Dark Sun by Richard Rhodes, and regards the inventor of the thermos bottle (or Dewar flask):",
"title": "Examples"
},
{
"paragraph_id": 21,
"text": "Sir James Dewar Is a better man than you are None of you asses Can liquefy gases.",
"title": "Examples"
},
{
"paragraph_id": 22,
"text": "Dark Sun also features a clerihew about the German-British physicist and Soviet nuclear spy Klaus Fuchs:",
"title": "Examples"
},
{
"paragraph_id": 23,
"text": "Fuchs Looks Like an ascetic Theoretic",
"title": "Examples"
},
{
"paragraph_id": 24,
"text": "In 1983, Games magazine ran a contest titled \"Do You Clerihew?\" The winning entry was:",
"title": "Examples"
},
{
"paragraph_id": 25,
"text": "Did Descartes Depart With the thought \"Therefore I'm not\"?",
"title": "Examples"
},
{
"paragraph_id": 26,
"text": "The clerihew form has also occasionally been used for non-biographical verses. Bentley opened his 1905 Biography for Beginners with an example, entitled \"Introductory Remarks\", on the theme of biography itself:",
"title": "Other uses of the form"
},
{
"paragraph_id": 27,
"text": "The Art of Biography Is different from Geography. Geography is about Maps, But Biography is about Chaps.",
"title": "Other uses of the form"
},
{
"paragraph_id": 28,
"text": "The third edition of the same work, published in 1925, includes a \"Preface to the New Edition\" in 11 stanzas, each in clerihew form. One stanza runs:",
"title": "Other uses of the form"
},
{
"paragraph_id": 29,
"text": "On biographic style (Formerly so vile) The book has had an effect Greater than I could reasonably expect.",
"title": "Other uses of the form"
}
] | A clerihew is a whimsical, four-line biographical poem of a type invented by Edmund Clerihew Bentley. The first line is the name of the poem's subject, usually a famous person, and the remainder puts the subject in an absurd light or reveals something unknown or spurious about the subject. The rhyme scheme is AABB, and the rhymes are often forced. The line length and metre are irregular. Bentley invented the clerihew in school and then popularized it in books. One of his best known is this (1905): | 2001-11-12T05:10:48Z | 2023-08-08T20:37:02Z | [
"Template:Poemquote",
"Template:Reflist",
"Template:Cite web",
"Template:OED",
"Template:Wiktionary",
"Template:Edmund Clerihew Bentley",
"Template:Short description",
"Template:Cite book",
"Template:Authority control",
"Template:IPAc-en"
] | https://en.wikipedia.org/wiki/Clerihew |
7,085 | Civil war | A civil war is a war between organized groups within the same state (or country). The aim of one side may be to take control of the country or a region, to achieve independence for a region, or to change government policies. The term is a calque of Latin bellum civile which was used to refer to the various civil wars of the Roman Republic in the 1st century BC.
Most modern civil wars involve intervention by outside powers. According to Patrick M. Regan in his book Civil Wars and Foreign Powers (2000) about two thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention.
A civil war is often a high-intensity conflict, often involving regular armed forces, that is sustained, organized and large-scale. Civil wars may result in large numbers of casualties and the consumption of significant resources.
Civil wars since the end of World War II have lasted on average just over four years, a dramatic rise from the one-and-a-half-year average of the 1900–1944 period. While the rate of emergence of new civil wars has been relatively steady since the mid-19th century, the increasing length of those wars has resulted in increasing numbers of wars ongoing at any one time. For example, there were no more than five civil wars underway simultaneously in the first half of the 20th century while there were over 20 concurrent civil wars close to the end of the Cold War. Since 1945, civil wars have resulted in the deaths of over 25 million people, as well as the forced displacement of millions more. Civil wars have further resulted in economic collapse; Somalia, Burma (Myanmar), Uganda and Angola are examples of nations that were considered to have had promising futures before being engulfed in civil wars.
James Fearon, a scholar of civil wars at Stanford University, defines a civil war as "a violent conflict within a country fought by organized groups that aim to take power at the center or in a region, or to change government policies". Ann Hironaka further specifies that one side of a civil war is the state. Stathis Kalyvas defines civil war as "armed combat taking place within the boundaries of a recognized sovereign entity between parties that are subject to a common authority at the outset of the hostilities." The intensity at which a civil disturbance becomes a civil war is contested by academics. Some political scientists define a civil war as having more than 1,000 casualties, while others further specify that at least 100 must come from each side. The Correlates of War, a dataset widely used by scholars of conflict, classifies civil wars as having over 1000 war-related casualties per year of conflict. This rate is a small fraction of the millions killed in the Second Sudanese Civil War and Cambodian Civil War, for example, but excludes several highly publicized conflicts, such as The Troubles of Northern Ireland and the struggle of the African National Congress in Apartheid-era South Africa.
Based on the 1,000-casualties-per-year criterion, there were 213 civil wars from 1816 to 1997, 104 of which occurred from 1944 to 1997. If one uses the less-stringent 1,000 casualties total criterion, there were over 90 civil wars between 1945 and 2007, with 20 ongoing civil wars as of 2007.
The Geneva Conventions do not specifically define the term "civil war"; nevertheless, they do outline the responsibilities of parties in "armed conflict not of an international character". This includes civil wars; however, no specific definition of civil war is provided in the text of the Conventions.
Nevertheless, the International Committee of the Red Cross has sought to provide some clarification through its commentaries on the Geneva Conventions, noting that the Conventions are "so general, so vague, that many of the delegations feared that it might be taken to cover any act committed by force of arms". Accordingly, the commentaries provide for different 'conditions' on which the application of the Geneva Convention would depend; the commentary, however, points out that these should not be interpreted as rigid conditions. The conditions listed by the ICRC in its commentary are as follows:
(b) That it has claimed for itself the rights of a belligerent; or
(c) That it has accorded the insurgents recognition as belligerents for the purposes only of the present Convention; or
(d) That the dispute has been admitted to the agenda of the Security Council or the General Assembly of the United Nations as being a threat to international peace, a breach of the peace, or an act of aggression.
(b) That the insurgent civil authority exercises de facto authority over the population within a determinate portion of the national territory.
(c) That the armed forces act under the direction of an organized authority and are prepared to observe the ordinary laws of war.
(d) That the insurgent civil authority agrees to be bound by the provisions of the Convention.
According to a 2017 review study of civil war research, there are three prominent explanations for civil war: greed-based explanations which center on individuals’ desire to maximize their profits, grievance-based explanations which center on conflict as a response to socioeconomic or political injustice, and opportunity-based explanations which center on factors that make it easier to engage in violent mobilization. According to the study, the most influential explanation for civil war onset is the opportunity-based explanation by James Fearon and David Laitin in their 2003 American Political Science Review article.
Scholars investigating the cause of civil war are attracted by two opposing theories, greed versus grievance. Roughly stated: are conflicts caused by differences of ethnicity, religion or other social affiliation, or do conflicts begin because it is in the economic best interests of individuals and groups to start them? Scholarly analysis supports the conclusion that economic and structural factors are more important than those of identity in predicting occurrences of civil war.
A comprehensive study of civil war was carried out by a team from the World Bank in the early 21st century. The study framework, which came to be called the Collier–Hoeffler Model, examined 78 five-year increments when civil war occurred from 1960 to 1999, as well as 1,167 five-year increments of "no civil war" for comparison, and subjected the data set to regression analysis to see the effect of various factors. The factors that were shown to have a statistically significant effect on the chance that a civil war would occur in any given five-year period were:
A high proportion of primary commodities in national exports significantly increases the risk of a conflict. A country at "peak danger", with commodities comprising 32% of gross domestic product, has a 22% risk of falling into civil war in a given five-year period, while a country with no primary commodity exports has a 1% risk. When disaggregated, only petroleum and non-petroleum groupings showed different results: a country with relatively low levels of dependence on petroleum exports is at slightly less risk, while a high level of dependence on oil as an export results in slightly more risk of a civil war than national dependence on another primary commodity. The authors of the study interpreted this as being the result of the ease by which primary commodities may be extorted or captured compared to other forms of wealth; for example, it is easy to capture and control the output of a gold mine or oil field compared to a sector of garment manufacturing or hospitality services.
A second source of finance is national diasporas, which can fund rebellions and insurgencies from abroad. The study found that statistically switching the size of a country's diaspora from the smallest found in the study to the largest resulted in a sixfold increase in the chance of a civil war.
Higher male secondary school enrollment, per capita income and economic growth rate all had significant effects on reducing the chance of civil war. Specifically, a male secondary school enrollment 10% above the average reduced the chance of a conflict by about 3%, while a growth rate 1% higher than the study average resulted in a decline in the chance of a civil war of about 1%. The study interpreted these three factors as proxies for earnings forgone by rebellion, and therefore that lower forgone earnings encourage rebellion. Phrased another way: young males (who make up the vast majority of combatants in civil wars) are less likely to join a rebellion if they are getting an education or have a comfortable salary, and can reasonably assume that they will prosper in the future.
Low per capita income has also been proposed as a cause for grievance, prompting armed rebellion. However, for this to be true, one would expect economic inequality to also be a significant factor in rebellions, which it is not. The study therefore concluded that the economic model of opportunity cost better explained the findings.
Most proxies for "grievance"—the theory that civil wars begin because of issues of identity, rather than economics—were statistically insignificant, including economic equality, political rights, ethnic polarization and religious fractionalization. Only ethnic dominance, the case where the largest ethnic group comprises a majority of the population, increased the risk of civil war. A country characterized by ethnic dominance has nearly twice the chance of a civil war. However, the combined effects of ethnic and religious fractionalization, i.e. the greater chance that any two randomly chosen people will be from separate ethnic or religious groups, the less chance of a civil war, were also significant and positive, as long as the country avoided ethnic dominance. The study interpreted this as stating that minority groups are more likely to rebel if they feel that they are being dominated, but that rebellions are more likely to occur the more homogeneous the population and thus more cohesive the rebels. These two factors may thus be seen as mitigating each other in many cases.
David Keen, a professor at the Development Studies Institute at the London School of Economics, is one of the major critics of greed vs. grievance theory, defined primarily by Paul Collier, and argues that a conflict, although he cannot define it precisely, cannot be pinned down to a single motive. He believes that conflicts are much more complex and thus should not be analyzed through simplified methods. He disagrees with the quantitative research methods of Collier and believes a stronger emphasis should be put on personal data and the human perspective of the people in conflict.
Beyond Keen, several other authors have introduced works that either disprove greed vs. grievance theory with empirical data, or dismiss its ultimate conclusion. Authors such as Cristina Bodea and Ibrahim Elbadawi, who co-wrote the entry, "Riots, coups and civil war: Revisiting the greed and grievance debate", argue that empirical data can disprove many of the proponents of greed theory and make the idea "irrelevant". They examine a myriad of factors and conclude that too many factors come into play with conflict, which cannot be confined to simply greed or grievance.
Anthony Vinci makes a strong argument that "fungible concept of power and the primary motivation of survival provide superior explanations of armed group motivation and, more broadly, the conduct of internal conflicts".
James Fearon and David Laitin find that ethnic and religious diversity does not make civil war more likely. They instead find that factors that make it easier for rebels to recruit foot soldiers and sustain insurgencies, such as "poverty—which marks financially & bureaucratically weak states and also favors rebel recruitment—political instability, rough terrain, and large populations" make civil wars more likely.
Such research finds that civil wars happen because the state is weak; both authoritarian and democratic states can be stable if they have the financial and military capacity to put down rebellions.
In a state torn by civil war, the contesting powers often do not have the ability to commit or the trust to believe in the other side's commitment to put an end to war. When considering a peace agreement, the involved parties are aware of the high incentives to withdraw once one of them has taken an action that weakens their military, political or economical power. Commitment problems may deter a lasting peace agreement as the powers in question are aware that neither of them is able to commit to their end of the bargain in the future. States are often unable to escape conflict traps (recurring civil war conflicts) due to the lack of strong political and legal institutions that motivate bargaining, settle disputes, and enforce peace settlements.
Political scientist Barbara F. Walter suggests that most contemporary civil wars are actually repeats of earlier civil wars that often arise when leaders are not accountable to the public, when there is poor public participation in politics, and when there is a lack of transparency of information between the executives and the public. Walter argues that when these issues are properly reversed, they act as political and legal restraints on executive power, forcing the established government to better serve the people. Additionally, these political and legal restraints create a standardized avenue to influence government and increase the commitment credibility of established peace treaties. It is the strength of a nation's institutionalization and good governance—not the presence of democracy nor the poverty level—that is the strongest indicator of the chance of a repeat civil war, according to Walter.
High levels of population dispersion and, to a lesser extent, the presence of mountainous terrain, increased the chance of conflict. Both of these factors favor rebels, as a population dispersed outward toward the borders is harder to control than one concentrated in a central region, while mountains offer terrain where rebels can seek sanctuary. Rough terrain was highlighted as one of the more important factors in a 2006 systematic review.
The various factors contributing to the risk of civil war all increase with population size: the risk of a civil war rises approximately in proportion to the size of a country's population.
There is a correlation between poverty and civil war, but the causality (which causes the other) is unclear. Some studies have found that in regions with lower income per capita, the likelihood of civil war is greater. Economists Simeon Djankov and Marta Reynal-Querol argue that the correlation is spurious, and that lower income and heightened conflict are instead products of other phenomena. In contrast, a study by Alex Braithwaite and colleagues showed systematic evidence of "a causal arrow running from poverty to conflict".
While there is a supposed negative correlation between absolute welfare levels and the probability of civil war outbreak, relative deprivation may actually be a more pertinent possible cause. Historically, higher inequality levels led to higher civil war probability. Since colonial rule and population size are also known to increase civil war risk, one may conclude that "the discontent of the colonized, caused by the creation of borders across tribal lines and bad treatment by the colonizers" is one important cause of civil conflicts.
The more time that has elapsed since the last civil war, the less likely it is that a conflict will recur. The study had two possible explanations for this: one opportunity-based and the other grievance-based. The elapsed time may represent the depreciation of whatever capital the rebellion was fought over and thus increase the opportunity cost of restarting the conflict. Alternatively, elapsed time may represent the gradual process of healing of old hatreds. The study found that the presence of a diaspora substantially reduced the positive effect of time, as the funding from diasporas offsets the depreciation of rebellion-specific capital.
Evolutionary psychologist Satoshi Kanazawa has argued that an important cause of intergroup conflict may be the relative availability of women of reproductive age. He found that polygyny greatly increased the frequency of civil wars but not interstate wars. Gleditsch et al. did not find a relationship between ethnic groups practicing polygyny and an increased frequency of civil wars, but found that nations with legal polygamy may have more civil wars. They argued that misogyny is a better explanation than polygyny. They found that increased women's rights were associated with fewer civil wars and that legal polygamy had no effect after women's rights were controlled for.
Political scholar Elisabeth Wood from Yale University offers yet another rationale for why civilians rebel and/or support civil war. Through her studies of the Salvadoran Civil War, Wood finds that traditional explanations of greed and grievance are not sufficient to explain the emergence of that insurgent movement. Instead, she argues that "emotional engagements" and "moral commitments" are the main reasons why thousands of civilians, most of them from poor and rural backgrounds, joined or supported the Farabundo Martí National Liberation Front, despite individually facing both high risks and virtually no foreseeable gains. Wood also attributes participation in the civil war to the value that insurgents assigned to changing social relations in El Salvador, an experience she defines as the "pleasure of agency".
Ann Hironaka, author of Neverending Wars, divides the modern history of civil wars into the pre-19th century, 19th century to early 20th century, and late 20th century. In 19th-century Europe, the length of civil wars fell significantly, largely due to the nature of the conflicts as battles for the power center of the state, the strength of centralized governments, and the normally quick and decisive intervention by other states to support the government. Following World War II, the duration of civil wars grew past the norm of the pre-19th century, largely due to the weakness of the many postcolonial states and the intervention by major powers on both sides of conflict. The most obvious commonality among civil wars is that they occur in fragile states.
Civil wars in the 19th century and in the early 20th century tended to be short; civil wars between 1900 and 1944 lasted on average one and a half years. The state itself formed the obvious center of authority in the majority of cases, and the civil wars were thus fought for control of the state. This meant that whoever had control of the capital and the military could normally crush resistance. A rebellion which failed to quickly seize the capital and control of the military for itself normally found itself doomed to rapid destruction. For example, the fighting associated with the 1871 Paris Commune occurred almost entirely in Paris, and ended quickly once the military sided with the government at Versailles and conquered Paris.
The power of non-state actors resulted in a lower value placed on sovereignty in the 18th and 19th centuries, which further reduced the number of civil wars. For example, the pirates of the Barbary Coast were recognized as de facto states because of their military power. The Barbary pirates thus had no need to rebel against the Ottoman Empire – their nominal state government – to gain recognition of their sovereignty. Conversely, states such as Virginia and Massachusetts in the United States of America did not have sovereign status, but had significant political and economic independence coupled with weak federal control, reducing the incentive to secede.
The two major global ideologies, monarchism and democracy, led to several civil wars. However, a bi-polar world, divided between the two ideologies, did not develop, largely due to the dominance of monarchists through most of the period. The monarchists would thus normally intervene in other countries to stop democratic movements taking control and forming democratic governments, which were seen by monarchists as being both dangerous and unpredictable. The Great Powers (defined in the 1815 Congress of Vienna as the United Kingdom, Habsburg Austria, Prussia, France, and Russia) would frequently coordinate interventions in other nations' civil wars, nearly always on the side of the incumbent government. Given the military strength of the Great Powers, these interventions nearly always proved decisive and quickly ended the civil wars.
There were several exceptions from the general rule of quick civil wars during this period. The American Civil War (1861–1865) was unusual for at least two reasons: it was fought around regional identities as well as political ideologies, and it ended through a war of attrition, rather than with a decisive battle over control of the capital, as was the norm. The Spanish Civil War (1936–1939) proved exceptional because both sides in the struggle received support from intervening great powers: Germany, Italy, and Portugal supported opposition leader Francisco Franco, while France and the Soviet Union supported the government (see proxy war).
In the 1990s, about twenty civil wars were occurring concurrently during an average year, a rate about ten times the historical average since the 19th century. However, the rate of new civil wars had not increased appreciably; the drastic rise in the number of ongoing wars after World War II was a result of the tripling of the average duration of civil wars to over four years. This increase was a result of the increased number of states, the fragility of states formed after 1945, the decline in interstate war, and the Cold War rivalry.
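The arithmetic behind this paragraph is a simple steady-state identity: the number of wars ongoing at any moment is roughly the onset rate multiplied by the average duration, so a steady onset rate combined with a tripled duration yields roughly three times as many concurrent wars. The sketch below only illustrates that mechanism; the onset rate is a made-up placeholder, and the durations are the rounded figures quoted in this article.

```python
# Back-of-the-envelope illustration of "steady onset rate x longer duration = more ongoing wars".
# The onset rate is an assumed placeholder, not a figure from Hironaka or Regan.

def ongoing_wars(onsets_per_year, mean_duration_years):
    # Steady-state approximation: wars in progress ~= arrival rate * average duration.
    return onsets_per_year * mean_duration_years

onset_rate = 4.0  # hypothetical new civil wars per year, held constant across periods
pre_1945 = ongoing_wars(onset_rate, 1.5)   # ~1.5-year average duration (per this article)
post_1945 = ongoing_wars(onset_rate, 4.0)  # >4-year average duration (per this article)

print(f"pre-1945:  ~{pre_1945:.0f} wars ongoing at a time")
print(f"post-1945: ~{post_1945:.0f} wars ongoing at a time")
print(f"ratio: {post_1945 / pre_1945:.1f}x, driven by duration alone")
```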
Following World War II, the major European powers divested themselves of their colonies at an increasing rate: the number of ex-colonial states jumped from about 30 to almost 120 after the war. The rate of state formation leveled off in the 1980s, at which point few colonies remained. More states also meant more states in which to have long civil wars. Hironaka statistically measures the impact of the increased number of ex-colonial states as increasing the post-World War II incidence of civil wars by +165% over the pre-1945 number.
While the new ex-colonial states appeared to follow the blueprint of the idealized state—centralized government, territory enclosed by defined borders, and citizenry with defined rights—as well as accessories such as a national flag, an anthem, a seat at the United Nations and an official economic policy, they were in actuality far weaker than the Western states they were modeled after. In Western states, the structure of governments closely matched states' actual capabilities, which had been arduously developed over centuries. The development of strong administrative structures, in particular those related to extraction of taxes, is closely associated with the intense warfare between predatory European states in the 17th and 18th centuries, or in Charles Tilly's famous formulation: "War made the state and the state made war". For example, the formation of the modern states of Germany and Italy in the 19th century is closely associated with the wars of expansion and consolidation led by Prussia and Sardinia-Piedmont, respectively. The Western process of forming effective and impersonal bureaucracies, developing efficient tax systems, and integrating national territory continued into the 20th century. Nevertheless, Western states that survived into the latter half of the 20th century were considered "strong" by simple reason that they had managed to develop the institutional structures and military capability required to survive predation by their fellow states.
In sharp contrast, decolonization was an entirely different process of state formation. Most imperial powers had not foreseen a need to prepare their colonies for independence; for example, Britain had given limited self-rule to India and Sri Lanka, while treating British Somaliland as little more than a trading post; all major decisions for French colonies were made in Paris, and Belgium prohibited any self-government until it suddenly granted independence to its colonies in 1960. Like Western states of previous centuries, the new ex-colonies lacked autonomous bureaucracies, which would make decisions based on the benefit to society as a whole, rather than respond to corruption and nepotism to favor a particular interest group. In such a situation, factions manipulate the state to benefit themselves or, alternatively, state leaders use the bureaucracy to further their own self-interest. The lack of credible governance was compounded by the fact that most colonies were economic loss-makers at independence, lacking both a productive economic base and a taxation system to effectively extract resources from economic activity. Among the rare states that were profitable at decolonization was India; scholars credibly argue that Uganda, Malaysia and Angola may be included in this group. Neither did imperial powers make territorial integration a priority, and may have discouraged nascent nationalism as a danger to their rule. Many newly independent states thus found themselves impoverished, with minimal administrative capacity in a fragmented society, while faced with the expectation of immediately meeting the demands of a modern state. Such states are considered "weak" or "fragile". The "strong"-"weak" categorization is not the same as "Western"-"non-Western", as some Latin American states like Argentina and Brazil and Middle Eastern states like Egypt and Israel are considered to have "strong" administrative structures and economic infrastructure.
Historically, the international community would have targeted weak states for territorial absorption or colonial domination or, alternatively, such states would fragment into pieces small enough to be effectively administered and secured by a local power. However, international norms towards sovereignty changed in the wake of World War II in ways that support and maintain the existence of weak states. Weak states are given de jure sovereignty equal to that of other states, even when they do not have de facto sovereignty or control of their own territory, including the privileges of international diplomatic recognition and an equal vote in the United Nations. Further, the international community offers development aid to weak states, which helps maintain the facade of a functioning modern state by giving the appearance that the state is capable of fulfilling its implied responsibilities of control and order. The formation of a strong international law regime and norms against territorial aggression is strongly associated with the dramatic drop in the number of interstate wars, though it has also been attributed to the effect of the Cold War or to the changing nature of economic development. Consequently, military aggression that results in territorial annexation became increasingly likely to prompt international condemnation, diplomatic censure, a reduction in international aid or the introduction of economic sanction, or, as in the case of 1990 invasion of Kuwait by Iraq, international military intervention to reverse the territorial aggression. Similarly, the international community has largely refused to recognize secessionist regions, while keeping some secessionist self-declared states such as Somaliland in diplomatic recognition limbo. While there is not a large body of academic work examining the relationship, Hironaka's statistical study found a correlation that suggests that every major international anti-secessionist declaration increased the number of ongoing civil wars by +10%, or a total +114% from 1945 to 1997. The diplomatic and legal protection given by the international community, as well as economic support to weak governments and discouragement of secession, thus had the unintended effect of encouraging civil wars.
There has been an enormous amount of international intervention in civil wars since 1945 that some have argued served to extend wars. According to Patrick M. Regan in his book Civil Wars and Foreign Powers (2000), about two thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention, with the United States intervening in 35 of these conflicts. While intervention has been practiced since the international system has existed, its nature changed substantially. It became common for both the state and the opposition group to receive foreign support, allowing wars to continue well past the point when domestic resources had been exhausted. Major powers, such as the European great powers, had always felt no compunction about intervening in civil wars that affected their interests, while distant regional powers such as the United States could declare the interventionist Monroe Doctrine of 1823 for events in its Central American "backyard". However, the large population of weak states after 1945 allowed intervention by former colonial powers, regional powers and neighboring states who themselves often had scarce resources.
The effectiveness of intervention is widely debated, in part because the data suffers from selection bias; as Fortna has argued, peacekeepers select themselves into difficult cases. When controlling for this effect, Fortna holds that peacekeeping is resoundingly successful in shortening wars. However, other scholars disagree. Knaus and Stewart are extremely skeptical as to the effectiveness of interventions, holding that they can only work when they are performed with extreme caution and sensitivity to context, a strategy they label 'principled incrementalism'. Few interventions, for them, have demonstrated such an approach. Other scholars offer more specific criticisms; Dube and Naidu, for instance, show that US military aid, a less conventional form of intervention, seems to be siphoned off to paramilitaries, thus exacerbating violence. Weinstein holds more generally that interventions might disrupt processes of 'autonomous recovery' whereby civil war contributes to state-building.
On average, a civil war with interstate intervention was 300% longer than those without. When disaggregated, a civil war with intervention on only one side is 156% longer, while when intervention occurs on both sides the average civil war is longer by an additional 92%. If one of the intervening states was a superpower, a civil war is a further 72% longer; a conflict such as the Angolan Civil War, in which there is two-sided foreign intervention, including by a superpower (actually, two superpowers in the case of Angola), would be 538% longer on average than a civil war without any international intervention.
The Cold War (1947–1991) provided a global network of material and ideological support that often helped perpetuate civil wars, which were mainly fought in weak ex-colonial states rather than the relatively strong states that were aligned with the Warsaw Pact and North Atlantic Treaty Organization. In some cases, superpowers would superimpose Cold War ideology onto local conflicts, while in others local actors using Cold War ideology would attract the attention of a superpower to obtain support. Using a separate statistical evaluation from the one used above for interventions, civil wars that included pro- or anti-communist forces lasted 141% longer than the average non-Cold War conflict, while a Cold War civil war that attracted superpower intervention resulted in wars typically lasting over three times as long as other civil wars. Conversely, the end of the Cold War, marked by the fall of the Berlin Wall in 1989, resulted in a reduction in the duration of Cold War civil wars of 92% or, phrased another way, a roughly ten-fold increase in the rate of resolution of Cold War civil wars. Lengthy Cold War-associated civil conflicts that ground to a halt include the wars of Guatemala (1960–1996), El Salvador (1979–1991) and Nicaragua (1970–1990).
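The two phrasings at the end of this paragraph describe the same change, since the rate at which wars are resolved scales as the inverse of their average duration: a 92% reduction in duration leaves 8% of the original duration and therefore implies a roughly twelve-fold (loosely, "ten-fold") faster rate of resolution. The fragment below simply spells out that arithmetic.

```python
# A 92% cut in average duration and a ~ten-fold rise in the resolution rate are
# the same statement viewed from two sides (resolution rate ~ 1 / duration).

duration_reduction = 0.92
remaining_fraction = 1.0 - duration_reduction   # wars now last 8% as long
resolution_speedup = 1.0 / remaining_fraction   # rate of resolution is the inverse

print(f"remaining duration: {remaining_fraction:.0%} of the original")
print(f"resolution rate: about {resolution_speedup:.1f}x faster (the text rounds this to ten-fold)")
```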
According to Barbara F. Walter,
post-2003 civil wars are different from previous civil wars in three striking ways. First, most of them are situated in Muslim-majority countries. Second, most of the rebel groups fighting these wars espouse radical Islamist ideas and goals. Third, most of these radical groups are pursuing transnational rather than national aims.
She argues
that the transformation of information technology, especially the advent of the Web 2.0 in the early 2000s, is the big new innovation that is likely driving many of these changes.
Civil wars often have severe economic consequences: two studies estimate that each year of civil war reduces a country's GDP growth by about 2%. Civil war also has a regional effect, reducing the GDP growth of neighboring countries. Civil wars also have the potential to lock a country into a conflict trap, where each conflict increases the likelihood of future conflict. | [
{
"paragraph_id": 0,
"text": "A civil war is a war between organized groups within the same state (or country). The aim of one side may be to take control of the country or a region, to achieve independence for a region, or to change government policies. The term is a calque of Latin bellum civile which was used to refer to the various civil wars of the Roman Republic in the 1st century BC.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Most modern civil wars involve intervention by outside powers. According to Patrick M. Regan in his book Civil Wars and Foreign Powers (2000) about two thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A civil war is often a high-intensity conflict, often involving regular armed forces, that is sustained, organized and large-scale. Civil wars may result in large numbers of casualties and the consumption of significant resources.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Civil wars since the end of World War II have lasted on average just over four years, a dramatic rise from the one-and-a-half-year average of the 1900–1944 period. While the rate of emergence of new civil wars has been relatively steady since the mid-19th century, the increasing length of those wars has resulted in increasing numbers of wars ongoing at any one time. For example, there were no more than five civil wars underway simultaneously in the first half of the 20th century while there were over 20 concurrent civil wars close to the end of the Cold War. Since 1945, civil wars have resulted in the deaths of over 25 million people, as well as the forced displacement of millions more. Civil wars have further resulted in economic collapse; Somalia, Burma (Myanmar), Uganda and Angola are examples of nations that were considered to have had promising futures before being engulfed in civil wars.",
"title": ""
},
{
"paragraph_id": 4,
"text": "James Fearon, a scholar of civil wars at Stanford University, defines a civil war as \"a violent conflict within a country fought by organized groups that aim to take power at the center or in a region, or to change government policies\". Ann Hironaka further specifies that one side of a civil war is the state. Stathis Kalyvas defines civil war as \"armed combat taking place within the boundaries of a recognized sovereign entity between parties that are subject to a common authority at the outset of the hostilities.\" The intensity at which a civil disturbance becomes a civil war is contested by academics. Some political scientists define a civil war as having more than 1,000 casualties, while others further specify that at least 100 must come from each side. The Correlates of War, a dataset widely used by scholars of conflict, classifies civil wars as having over 1000 war-related casualties per year of conflict. This rate is a small fraction of the millions killed in the Second Sudanese Civil War and Cambodian Civil War, for example, but excludes several highly publicized conflicts, such as The Troubles of Northern Ireland and the struggle of the African National Congress in Apartheid-era South Africa.",
"title": "Formal classification"
},
{
"paragraph_id": 5,
"text": "Based on the 1,000-casualties-per-year criterion, there were 213 civil wars from 1816 to 1997, 104 of which occurred from 1944 to 1997. If one uses the less-stringent 1,000 casualties total criterion, there were over 90 civil wars between 1945 and 2007, with 20 ongoing civil wars as of 2007.",
"title": "Formal classification"
},
{
"paragraph_id": 6,
"text": "The Geneva Conventions do not specifically define the term \"civil war\"; nevertheless, they do outline the responsibilities of parties in \"armed conflict not of an international character\". This includes civil wars; however, no specific definition of civil war is provided in the text of the Conventions.",
"title": "Formal classification"
},
{
"paragraph_id": 7,
"text": "Nevertheless, the International Committee of the Red Cross has sought to provide some clarification through its commentaries on the Geneva Conventions, noting that the Conventions are \"so general, so vague, that many of the delegations feared that it might be taken to cover any act committed by force of arms\". Accordingly, the commentaries provide for different 'conditions' on which the application of the Geneva Convention would depend; the commentary, however, points out that these should not be interpreted as rigid conditions. The conditions listed by the ICRC in its commentary are as follows:",
"title": "Formal classification"
},
{
"paragraph_id": 8,
"text": "(b) That it has claimed for itself the rights of a belligerent; or",
"title": "Formal classification"
},
{
"paragraph_id": 9,
"text": "(c) That it has accorded the insurgents recognition as belligerents for the purposes only of the present Convention; or",
"title": "Formal classification"
},
{
"paragraph_id": 10,
"text": "(d) That the dispute has been admitted to the agenda of the Security Council or the General Assembly of the United Nations as being a threat to international peace, a breach of the peace, or an act of aggression.",
"title": "Formal classification"
},
{
"paragraph_id": 11,
"text": "(b) That the insurgent civil authority exercises de facto authority over the population within a determinate portion of the national territory.",
"title": "Formal classification"
},
{
"paragraph_id": 12,
"text": "(c) That the armed forces act under the direction of an organized authority and are prepared to observe the ordinary laws of war.",
"title": "Formal classification"
},
{
"paragraph_id": 13,
"text": "(d) That the insurgent civil authority agrees to be bound by the provisions of the Convention.",
"title": "Formal classification"
},
{
"paragraph_id": 14,
"text": "According to a 2017 review study of civil war research, there are three prominent explanations for civil war: greed-based explanations which center on individuals’ desire to maximize their profits, grievance-based explanations which center on conflict as a response to socioeconomic or political injustice, and opportunity-based explanations which center on factors that make it easier to engage in violent mobilization. According to the study, the most influential explanation for civil war onset is the opportunity-based explanation by James Fearon and David Laitin in their 2003 American Political Science Review article.",
"title": "Causes"
},
{
"paragraph_id": 15,
"text": "Scholars investigating the cause of civil war are attracted by two opposing theories, greed versus grievance. Roughly stated: are conflicts caused by differences of ethnicity, religion or other social affiliation, or do conflicts begin because it is in the economic best interests of individuals and groups to start them? Scholarly analysis supports the conclusion that economic and structural factors are more important than those of identity in predicting occurrences of civil war.",
"title": "Causes"
},
{
"paragraph_id": 16,
"text": "A comprehensive study of civil war was carried out by a team from the World Bank in the early 21st century. The study framework, which came to be called the Collier–Hoeffler Model, examined 78 five-year increments when civil war occurred from 1960 to 1999, as well as 1,167 five-year increments of \"no civil war\" for comparison, and subjected the data set to regression analysis to see the effect of various factors. The factors that were shown to have a statistically significant effect on the chance that a civil war would occur in any given five-year period were:",
"title": "Causes"
},
{
"paragraph_id": 17,
"text": "A high proportion of primary commodities in national exports significantly increases the risk of a conflict. A country at \"peak danger\", with commodities comprising 32% of gross domestic product, has a 22% risk of falling into civil war in a given five-year period, while a country with no primary commodity exports has a 1% risk. When disaggregated, only petroleum and non-petroleum groupings showed different results: a country with relatively low levels of dependence on petroleum exports is at slightly less risk, while a high level of dependence on oil as an export results in slightly more risk of a civil war than national dependence on another primary commodity. The authors of the study interpreted this as being the result of the ease by which primary commodities may be extorted or captured compared to other forms of wealth; for example, it is easy to capture and control the output of a gold mine or oil field compared to a sector of garment manufacturing or hospitality services.",
"title": "Causes"
},
{
"paragraph_id": 18,
"text": "A second source of finance is national diasporas, which can fund rebellions and insurgencies from abroad. The study found that statistically switching the size of a country's diaspora from the smallest found in the study to the largest resulted in a sixfold increase in the chance of a civil war.",
"title": "Causes"
},
{
"paragraph_id": 19,
"text": "Higher male secondary school enrollment, per capita income and economic growth rate all had significant effects on reducing the chance of civil war. Specifically, a male secondary school enrollment 10% above the average reduced the chance of a conflict by about 3%, while a growth rate 1% higher than the study average resulted in a decline in the chance of a civil war of about 1%. The study interpreted these three factors as proxies for earnings forgone by rebellion, and therefore that lower forgone earnings encourage rebellion. Phrased another way: young males (who make up the vast majority of combatants in civil wars) are less likely to join a rebellion if they are getting an education or have a comfortable salary, and can reasonably assume that they will prosper in the future.",
"title": "Causes"
},
{
"paragraph_id": 20,
"text": "Low per capita income has also been proposed as a cause for grievance, prompting armed rebellion. However, for this to be true, one would expect economic inequality to also be a significant factor in rebellions, which it is not. The study therefore concluded that the economic model of opportunity cost better explained the findings.",
"title": "Causes"
},
{
"paragraph_id": 21,
"text": "Most proxies for \"grievance\"—the theory that civil wars begin because of issues of identity, rather than economics—were statistically insignificant, including economic equality, political rights, ethnic polarization and religious fractionalization. Only ethnic dominance, the case where the largest ethnic group comprises a majority of the population, increased the risk of civil war. A country characterized by ethnic dominance has nearly twice the chance of a civil war. However, the combined effects of ethnic and religious fractionalization, i.e. the greater chance that any two randomly chosen people will be from separate ethnic or religious groups, the less chance of a civil war, were also significant and positive, as long as the country avoided ethnic dominance. The study interpreted this as stating that minority groups are more likely to rebel if they feel that they are being dominated, but that rebellions are more likely to occur the more homogeneous the population and thus more cohesive the rebels. These two factors may thus be seen as mitigating each other in many cases.",
"title": "Causes"
},
{
"paragraph_id": 22,
"text": "David Keen, a professor at the Development Studies Institute at the London School of Economics is one of the major critics of greed vs. grievance theory, defined primarily by Paul Collier, and argues the point that a conflict, although he cannot define it, cannot be pinpointed to simply one motive. He believes that conflicts are much more complex and thus should not be analyzed through simplified methods. He disagrees with the quantitative research methods of Collier and believes a stronger emphasis should be put on personal data and human perspective of the people in conflict.",
"title": "Causes"
},
{
"paragraph_id": 23,
"text": "Beyond Keen, several other authors have introduced works that either disprove greed vs. grievance theory with empirical data, or dismiss its ultimate conclusion. Authors such as Cristina Bodea and Ibrahim Elbadawi, who co-wrote the entry, \"Riots, coups and civil war: Revisiting the greed and grievance debate\", argue that empirical data can disprove many of the proponents of greed theory and make the idea \"irrelevant\". They examine a myriad of factors and conclude that too many factors come into play with conflict, which cannot be confined to simply greed or grievance.",
"title": "Causes"
},
{
"paragraph_id": 24,
"text": "Anthony Vinci makes a strong argument that \"fungible concept of power and the primary motivation of survival provide superior explanations of armed group motivation and, more broadly, the conduct of internal conflicts\".",
"title": "Causes"
},
{
"paragraph_id": 25,
"text": "James Fearon and David Laitin find that ethnic and religious diversity does not make civil war more likely. They instead find that factors that make it easier for rebels to recruit foot soldiers and sustain insurgencies, such as \"poverty—which marks financially & bureaucratically weak states and also favors rebel recruitment—political instability, rough terrain, and large populations\" make civil wars more likely.",
"title": "Causes"
},
{
"paragraph_id": 26,
"text": "Such research finds that civil wars happen because the state is weak; both authoritarian and democratic states can be stable if they have the financial and military capacity to put down rebellions.",
"title": "Causes"
},
{
"paragraph_id": 27,
"text": "In a state torn by civil war, the contesting powers often do not have the ability to commit or the trust to believe in the other side's commitment to put an end to war. When considering a peace agreement, the involved parties are aware of the high incentives to withdraw once one of them has taken an action that weakens their military, political or economical power. Commitment problems may deter a lasting peace agreement as the powers in question are aware that neither of them is able to commit to their end of the bargain in the future. States are often unable to escape conflict traps (recurring civil war conflicts) due to the lack of strong political and legal institutions that motivate bargaining, settle disputes, and enforce peace settlements.",
"title": "Causes"
},
{
"paragraph_id": 28,
"text": "Political scientist Barbara F. Walter suggests that most contemporary civil wars are actually repeats of earlier civil wars that often arise when leaders are not accountable to the public, when there is poor public participation in politics, and when there is a lack of transparency of information between the executives and the public. Walter argues that when these issues are properly reversed, they act as political and legal restraints on executive power forcing the established government to better serve the people. Additionally, these political and legal restraints create a standardized avenue to influence government and increase the commitment credibility of established peace treaties. It is the strength of a nation’s institutionalization and good governance—not the presence of democracy nor the poverty level—that is the number one indicator of the chance of a repeat civil war, according to Walter.",
"title": "Causes"
},
{
"paragraph_id": 29,
"text": "High levels of population dispersion and, to a lesser extent, the presence of mountainous terrain, increased the chance of conflict. Both of these factors favor rebels, as a population dispersed outward toward the borders is harder to control than one concentrated in a central region, while mountains offer terrain where rebels can seek sanctuary. Rough terrain was highlighted as one of the more important factors in a 2006 systematic review.",
"title": "Causes"
},
{
"paragraph_id": 30,
"text": "The various factors contributing to the risk of civil war rise increase with population size. The risk of a civil war rises approximately proportionately with the size of a country's population.",
"title": "Causes"
},
{
"paragraph_id": 31,
"text": "There is a correlation between poverty and civil war, but the causality (which causes the other) is unclear. Some studies have found that in regions with lower income per capita, the likelihood of civil war is greater. Economists Simeon Djankov and Marta Reynal-Querol argue that the correlation is spurious, and that lower income and heightened conflict are instead products of other phenomena. In contrast, a study by Alex Braithwaite and colleagues showed systematic evidence of \"a causal arrow running from poverty to conflict\".",
"title": "Causes"
},
{
"paragraph_id": 32,
"text": "While there is a supposed negative correlation between absolute welfare levels and the probability of civil war outbreak, relative deprivation may actually be a more pertinent possible cause. Historically, higher inequality levels led to higher civil war probability. Since colonial rule or population size are known to increase civil war risk, also, one may conclude that \"the discontent of the colonized, caused by the creation of borders across tribal lines and bad treatment by the colonizers\" is one important cause of civil conflicts.",
"title": "Causes"
},
{
"paragraph_id": 33,
"text": "The more time that has elapsed since the last civil war, the less likely it is that a conflict will recur. The study had two possible explanations for this: one opportunity-based and the other grievance-based. The elapsed time may represent the depreciation of whatever capital the rebellion was fought over and thus increase the opportunity cost of restarting the conflict. Alternatively, elapsed time may represent the gradual process of healing of old hatreds. The study found that the presence of a diaspora substantially reduced the positive effect of time, as the funding from diasporas offsets the depreciation of rebellion-specific capital.",
"title": "Causes"
},
{
"paragraph_id": 34,
"text": "Evolutionary psychologist Satoshi Kanazawa has argued that an important cause of intergroup conflict may be the relative availability of women of reproductive age. He found that polygyny greatly increased the frequency of civil wars but not interstate wars. Gleditsch et al. did not find a relationship between ethnic groups with polygyny and increased frequency of civil wars but nations having legal polygamy may have more civil wars. They argued that misogyny is a better explanation than polygyny. They found that increased women's rights were associated with fewer civil wars and that legal polygamy had no effect after women's rights were controlled for.",
"title": "Causes"
},
{
"paragraph_id": 35,
"text": "Political scholar Elisabeth Wood from Yale University offers yet another rationale for why civilians rebel and/or support civil war. Through her studies of the Salvadoran Civil War, Wood finds that traditional explanations of greed and grievance are not sufficient to explain the emergence of that insurgent movement. Instead, she argues that \"emotional engagements\" and \"moral commitments\" are the main reasons why thousand of civilians, most of them from poor and rural backgrounds, joined or supported the Farabundo Martí National Liberation Front, despite individually facing both high risks and virtually no foreseeable gains. Wood also attributes participation in the civil war to the value that insurgents assigned to changing social relations in El Salvador, an experience she defines as the \"pleasure of agency\".",
"title": "Causes"
},
{
"paragraph_id": 36,
"text": "Ann Hironaka, author of Neverending Wars, divides the modern history of civil wars into the pre-19th century, 19th century to early 20th century, and late 20th century. In 19th-century Europe, the length of civil wars fell significantly, largely due to the nature of the conflicts as battles for the power center of the state, the strength of centralized governments, and the normally quick and decisive intervention by other states to support the government. Following World War II the duration of civil wars grew past the norm of the pre-19th century, largely due to weakness of the many postcolonial states and the intervention by major powers on both sides of conflict. The most obvious commonality to civil wars are that they occur in fragile states.",
"title": "Duration and effects"
},
{
"paragraph_id": 37,
"text": "Civil wars in the 19th century and in the early 20th century tended to be short; civil wars between 1900 and 1944 lasted on average one and half years. The state itself formed the obvious center of authority in the majority of cases, and the civil wars were thus fought for control of the state. This meant that whoever had control of the capital and the military could normally crush resistance. A rebellion which failed to quickly seize the capital and control of the military for itself normally found itself doomed to rapid destruction. For example, the fighting associated with the 1871 Paris Commune occurred almost entirely in Paris, and ended quickly once the military sided with the government at Versailles and conquered Paris.",
"title": "Duration and effects"
},
{
"paragraph_id": 38,
"text": "The power of non-state actors resulted in a lower value placed on sovereignty in the 18th and 19th centuries, which further reduced the number of civil wars. For example, the pirates of the Barbary Coast were recognized as de facto states because of their military power. The Barbary pirates thus had no need to rebel against the Ottoman Empire – their nominal state government – to gain recognition of their sovereignty. Conversely, states such as Virginia and Massachusetts in the United States of America did not have sovereign status, but had significant political and economic independence coupled with weak federal control, reducing the incentive to secede.",
"title": "Duration and effects"
},
{
"paragraph_id": 39,
"text": "The two major global ideologies, monarchism and democracy, led to several civil wars. However, a bi-polar world, divided between the two ideologies, did not develop, largely due to the dominance of monarchists through most of the period. The monarchists would thus normally intervene in other countries to stop democratic movements taking control and forming democratic governments, which were seen by monarchists as being both dangerous and unpredictable. The Great Powers (defined in the 1815 Congress of Vienna as the United Kingdom, Habsburg Austria, Prussia, France, and Russia) would frequently coordinate interventions in other nations' civil wars, nearly always on the side of the incumbent government. Given the military strength of the Great Powers, these interventions nearly always proved decisive and quickly ended the civil wars.",
"title": "Duration and effects"
},
{
"paragraph_id": 40,
"text": "There were several exceptions from the general rule of quick civil wars during this period. The American Civil War (1861–1865) was unusual for at least two reasons: it was fought around regional identities as well as political ideologies, and it ended through a war of attrition, rather than with a decisive battle over control of the capital, as was the norm. The Spanish Civil War (1936–1939) proved exceptional because both sides in the struggle received support from intervening great powers: Germany, Italy, and Portugal supported opposition leader Francisco Franco, while France and the Soviet Union supported the government (see proxy war).",
"title": "Duration and effects"
},
{
"paragraph_id": 41,
"text": "In the 1990s, about twenty civil wars were occurring concurrently during an average year, a rate about ten times the historical average since the 19th century. However, the rate of new civil wars had not increased appreciably; the drastic rise in the number of ongoing wars after World War II was a result of the tripling of the average duration of civil wars to over four years. This increase was a result of the increased number of states, the fragility of states formed after 1945, the decline in interstate war, and the Cold War rivalry.",
"title": "Duration and effects"
},
{
"paragraph_id": 42,
"text": "Following World War II, the major European powers divested themselves of their colonies at an increasing rate: the number of ex-colonial states jumped from about 30 to almost 120 after the war. The rate of state formation leveled off in the 1980s, at which point few colonies remained. More states also meant more states in which to have long civil wars. Hironaka statistically measures the impact of the increased number of ex-colonial states as increasing the post-World War II incidence of civil wars by +165% over the pre-1945 number.",
"title": "Duration and effects"
},
{
"paragraph_id": 43,
"text": "While the new ex-colonial states appeared to follow the blueprint of the idealized state—centralized government, territory enclosed by defined borders, and citizenry with defined rights—as well as accessories such as a national flag, an anthem, a seat at the United Nations and an official economic policy, they were in actuality far weaker than the Western states they were modeled after. In Western states, the structure of governments closely matched states' actual capabilities, which had been arduously developed over centuries. The development of strong administrative structures, in particular those related to extraction of taxes, is closely associated with the intense warfare between predatory European states in the 17th and 18th centuries, or in Charles Tilly's famous formulation: \"War made the state and the state made war\". For example, the formation of the modern states of Germany and Italy in the 19th century is closely associated with the wars of expansion and consolidation led by Prussia and Sardinia-Piedmont, respectively. The Western process of forming effective and impersonal bureaucracies, developing efficient tax systems, and integrating national territory continued into the 20th century. Nevertheless, Western states that survived into the latter half of the 20th century were considered \"strong\" by simple reason that they had managed to develop the institutional structures and military capability required to survive predation by their fellow states.",
"title": "Duration and effects"
},
{
"paragraph_id": 44,
"text": "In sharp contrast, decolonization was an entirely different process of state formation. Most imperial powers had not foreseen a need to prepare their colonies for independence; for example, Britain had given limited self-rule to India and Sri Lanka, while treating British Somaliland as little more than a trading post, while all major decisions for French colonies were made in Paris and Belgium prohibited any self-government up until it suddenly granted independence to its colonies in 1960. Like Western states of previous centuries, the new ex-colonies lacked autonomous bureaucracies, which would make decisions based on the benefit to society as a whole, rather than respond to corruption and nepotism to favor a particular interest group. In such a situation, factions manipulate the state to benefit themselves or, alternatively, state leaders use the bureaucracy to further their own self-interest. The lack of credible governance was compounded by the fact that most colonies were economic loss-makers at independence, lacking both a productive economic base and a taxation system to effectively extract resources from economic activity. Among the rare states profitable at decolonization was India, to which scholars credibly argue that Uganda, Malaysia and Angola may be included. Neither did imperial powers make territorial integration a priority, and may have discouraged nascent nationalism as a danger to their rule. Many newly independent states thus found themselves impoverished, with minimal administrative capacity in a fragmented society, while faced with the expectation of immediately meeting the demands of a modern state. Such states are considered \"weak\" or \"fragile\". The \"strong\"-\"weak\" categorization is not the same as \"Western\"-\"non-Western\", as some Latin American states like Argentina and Brazil and Middle Eastern states like Egypt and Israel are considered to have \"strong\" administrative structures and economic infrastructure.",
"title": "Duration and effects"
},
{
"paragraph_id": 45,
"text": "Historically, the international community would have targeted weak states for territorial absorption or colonial domination or, alternatively, such states would fragment into pieces small enough to be effectively administered and secured by a local power. However, international norms towards sovereignty changed in the wake of World War II in ways that support and maintain the existence of weak states. Weak states are given de jure sovereignty equal to that of other states, even when they do not have de facto sovereignty or control of their own territory, including the privileges of international diplomatic recognition and an equal vote in the United Nations. Further, the international community offers development aid to weak states, which helps maintain the facade of a functioning modern state by giving the appearance that the state is capable of fulfilling its implied responsibilities of control and order. The formation of a strong international law regime and norms against territorial aggression is strongly associated with the dramatic drop in the number of interstate wars, though it has also been attributed to the effect of the Cold War or to the changing nature of economic development. Consequently, military aggression that results in territorial annexation became increasingly likely to prompt international condemnation, diplomatic censure, a reduction in international aid or the introduction of economic sanction, or, as in the case of 1990 invasion of Kuwait by Iraq, international military intervention to reverse the territorial aggression. Similarly, the international community has largely refused to recognize secessionist regions, while keeping some secessionist self-declared states such as Somaliland in diplomatic recognition limbo. While there is not a large body of academic work examining the relationship, Hironaka's statistical study found a correlation that suggests that every major international anti-secessionist declaration increased the number of ongoing civil wars by +10%, or a total +114% from 1945 to 1997. The diplomatic and legal protection given by the international community, as well as economic support to weak governments and discouragement of secession, thus had the unintended effect of encouraging civil wars.",
"title": "Duration and effects"
},
{
"paragraph_id": 46,
"text": "There has been an enormous amount of international intervention in civil wars since 1945 that some have argued served to extend wars. According to Patrick M. Regan in his book Civil Wars and Foreign Powers (2000) about 2/3rds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention, with the United States intervening in 35 of these conflicts. While intervention has been practiced since the international system has existed, its nature changed substantially. It became common for both the state and opposition group to receive foreign support, allowing wars to continue well past the point when domestic resources had been exhausted. Superpowers, such as the European great powers, had always felt no compunction in intervening in civil wars that affected their interests, while distant regional powers such as the United States could declare the interventionist Monroe Doctrine of 1821 for events in its Central American \"backyard\". However, the large population of weak states after 1945 allowed intervention by former colonial powers, regional powers and neighboring states who themselves often had scarce resources.",
"title": "Duration and effects"
},
{
"paragraph_id": 47,
"text": "The effectiveness of intervention is widely debated, in part because the data suffers from selection bias; as Fortna has argued, peacekeepers select themselves into difficult cases. When controlling for this effect, Forta holds that peacekeeping is resoundingly successful in shortening wars. However, other scholars disagree. Knaus and Stewart are extremely skeptical as to the effectiveness of interventions, holding that they can only work when they are performed with extreme caution and sensitivity to context, a strategy they label 'principled incrementalism'. Few interventions, for them, have demonstrated such an approach. Other scholars offer more specific criticisms; Dube and Naidu, for instance, show that US military aid, a less conventional form of intervention, seems to be siphoned off to paramilitaries thus exacerbating violence. Weinstein holds more generally that interventions might disrupt processes of 'autonomous recovery' whereby civil war contributes to state-building.",
"title": "Duration and effects"
},
{
"paragraph_id": 48,
"text": "On average, a civil war with interstate intervention was 300% longer than those without. When disaggregated, a civil war with intervention on only one side is 156% longer, while when intervention occurs on both sides the average civil war is longer by an additional 92%. If one of the intervening states was a superpower, a civil war is a further 72% longer; a conflict such as the Angolan Civil War, in which there is two-sided foreign intervention, including by a superpower (actually, two superpowers in the case of Angola), would be 538% longer on average than a civil war without any international intervention.",
"title": "Duration and effects"
},
{
"paragraph_id": 49,
"text": "The Cold War (1947–1991) provided a global network of material and ideological support that often helped perpetuate civil wars, which were mainly fought in weak ex-colonial states rather than the relatively strong states that were aligned with the Warsaw Pact and North Atlantic Treaty Organization. In some cases, superpowers would superimpose Cold War ideology onto local conflicts, while in others local actors using Cold War ideology would attract the attention of a superpower to obtain support. Using a separate statistical evaluation than used above for interventions, civil wars that included pro- or anti-communist forces lasted 141% longer than the average non-Cold War conflict, while a Cold War civil war that attracted superpower intervention resulted in wars typically lasting over three times as long as other civil wars. Conversely, the end of the Cold War marked by the fall of the Berlin Wall in 1989 resulted in a reduction in the duration of Cold War civil wars of 92% or, phrased another way, a roughly ten-fold increase in the rate of resolution of Cold War civil wars. Lengthy Cold War-associated civil conflicts that ground to a halt include the wars of Guatemala (1960–1996), El Salvador (1979–1991) and Nicaragua (1970–1990).",
"title": "Duration and effects"
},
{
"paragraph_id": 50,
"text": "According to Barbara F. Walter,",
"title": "Duration and effects"
},
{
"paragraph_id": 51,
"text": "post-2003 civil wars are different from previous civil wars in three striking ways. First, most of them are situated in Muslim-majority countries. Second, most of the rebel groups fighting these wars espouse radical Islamist ideas and goals. Third, most of these radical groups are pursuing transnational rather than national aims.",
"title": "Duration and effects"
},
{
"paragraph_id": 52,
"text": "She argues",
"title": "Duration and effects"
},
{
"paragraph_id": 53,
"text": "that the transformation of information technology, especially the advent of the Web 2.0 in the early 2000s, is the big new innovation that is likely driving many of these changes.",
"title": "Duration and effects"
},
{
"paragraph_id": 54,
"text": "Civil wars often have severe economic consequences: two studies estimate that each year of civil war reduces a country's GDP growth by about 2%. It also has a regional effect, reducing the GDP growth of neighboring countries. Civil wars also have the potential to lock the country in a conflict trap, where each conflict increases the likelihood of future conflict.",
"title": "Duration and effects"
}
] | A civil war is a war between organized groups within the same state.
The aim of one side may be to take control of the country or a region, to achieve independence for a region, or to change government policies.
The term is a calque of Latin bellum civile which was used to refer to the various civil wars of the Roman Republic in the 1st century BC. Most modern civil wars involve intervention by outside powers. According to Patrick M. Regan in his book Civil Wars and Foreign Powers (2000) about two thirds of the 138 intrastate conflicts between the end of World War II and 2000 saw international intervention. A civil war is often a high-intensity conflict, often involving regular armed forces, that is sustained, organized and large-scale. Civil wars may result in large numbers of casualties and the consumption of significant resources. Civil wars since the end of World War II have lasted on average just over four years, a dramatic rise from the one-and-a-half-year average of the 1900–1944 period. While the rate of emergence of new civil wars has been relatively steady since the mid-19th century, the increasing length of those wars has resulted in increasing numbers of wars ongoing at any one time. For example, there were no more than five civil wars underway simultaneously in the first half of the 20th century while there were over 20 concurrent civil wars close to the end of the Cold War. Since 1945, civil wars have resulted in the deaths of over 25 million people, as well as the forced displacement of millions more. Civil wars have further resulted in economic collapse; Somalia, Burma (Myanmar), Uganda and Angola are examples of nations that were considered to have had promising futures before being engulfed in civil wars. | 2001-11-12T18:01:11Z | 2023-11-28T04:17:48Z | [
"Template:Pp-semi-protected",
"Template:War",
"Template:Efn",
"Template:Cite book",
"Template:Harvnb",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Commons category",
"Template:Wiktionary",
"Template:Authority control",
"Template:Broaden",
"Template:Sfn",
"Template:Notelist",
"Template:Webarchive",
"Template:Cite journal",
"Template:ISBN",
"Template:Short description",
"Template:For",
"Template:Redirect",
"Template:Clarify",
"Template:Quote"
] | https://en.wikipedia.org/wiki/Civil_war |
7,088 | List of cryptographers | This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries.
See also: Category:Modern cryptographers for a more exhaustive list. | [
{
"paragraph_id": 0,
"text": "This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries.",
"title": ""
},
{
"paragraph_id": 1,
"text": "See also: Category:Modern cryptographers for a more exhaustive list.",
"title": "Modern"
}
] | This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries. | 2001-11-13T00:33:40Z | 2023-11-24T02:52:09Z | [
"Template:Cite book",
"Template:Cite magazine",
"Template:Wiktionary",
"Template:Short description",
"Template:Use dmy dates",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/List_of_cryptographers |
7,089 | Chocolate | Chocolate or cocoa is a food made from roasted and ground cacao seed kernels that is available as a liquid, solid, or paste, either on its own or as a flavoring agent in other foods. Cacao has been consumed in some form since at least the Olmec civilization (19th–11th century BCE), and later Mesoamerican civilizations also consumed chocolate beverages before chocolate was introduced to Europe in the 16th century.
The seeds of the cacao tree have an intense bitter taste and must be fermented to develop the flavor. After fermentation, the seeds are dried, cleaned, and roasted. The shell is removed to produce cocoa nibs, which are then ground to cocoa mass, unadulterated chocolate in rough form. Once the cocoa mass is liquefied by heating, it is called chocolate liquor. The liquor may also be cooled and processed into its two components: cocoa solids and cocoa butter. Baking chocolate, also called bitter chocolate, contains cocoa solids and cocoa butter in varying proportions without any added sugar. Powdered baking cocoa, which contains more fiber than cocoa butter, can be processed with alkali to produce Dutch cocoa. Much of the chocolate consumed today is in the form of sweet chocolate, a combination of cocoa solids, cocoa butter, or added vegetable oils and sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids.
Chocolate is one of the most popular food types and flavors in the world, and many foodstuffs involving chocolate exist, particularly desserts, including cakes, pudding, mousse, chocolate brownies, and chocolate chip cookies. Many candies are filled with or coated with sweetened chocolate. Chocolate bars, either made of solid chocolate or other ingredients coated in chocolate, are eaten as snacks. Gifts of chocolate molded into different shapes (such as eggs, hearts, and coins) are traditional on certain Western holidays, including Christmas, Easter, Valentine's Day, and Hanukkah. Chocolate is also used in cold and hot beverages, such as chocolate milk and hot chocolate, and in some alcoholic drinks, such as creme de cacao.
Although cocoa originated in the Americas, West African countries, particularly Côte d'Ivoire and Ghana, are the leading producers of cocoa in the 21st century, accounting for some 60% of the world cocoa supply.
With some two million children involved in the farming of cocoa in West Africa, child slavery and trafficking associated with the cocoa trade remain major concerns. A 2018 report argued that international attempts to improve conditions for children were doomed to failure because of persistent poverty, the absence of schools, increasing world cocoa demand, more intensive farming of cocoa, and continued exploitation of child labor.
Chocolate has been prepared as a drink for nearly all of its history. For example, one vessel found at an Olmec archaeological site on the Gulf Coast of Veracruz, Mexico, dates chocolate's preparation by pre-Olmec peoples as early as 1750 BC. On the Pacific coast of Chiapas, Mexico, a Mokaya archaeological site provides evidence of cocoa beverages dating even earlier to 1900 BC. The residues and the kind of vessel in which they were found indicate the initial use of cocoa was not simply as a beverage; the white pulp around the cocoa beans was likely used as a source of fermentable sugars for an alcoholic drink.
An early Classic-period (460–480 AD) Maya tomb from the site in Rio Azul had vessels with the Maya glyph for cocoa on them with residue of a chocolate drink, which suggests that the Maya were drinking chocolate around 400 AD. Documents in Maya hieroglyphs stated that chocolate was used for ceremonial purposes in addition to everyday life. The Maya grew cacao trees in their backyards and used the cocoa seeds the trees produced to make a frothy, bitter drink.
By the 15th century, the Aztecs had gained control of a large part of Mesoamerica and had adopted cocoa into their culture. They associated chocolate with Quetzalcoatl, who, according to one legend, was cast away by the other gods for sharing chocolate with humans, and identified its extrication from the pod with the removal of the human heart in sacrifice. In contrast to the Maya, who liked their chocolate warm, the Aztecs drank it cold, seasoning it with a broad variety of additives, including the petals of the Cymbopetalum penduliflorum tree, chili pepper, allspice, vanilla, and honey.
The Aztecs were unable to grow cocoa themselves, as their home in the Mexican highlands was unsuitable for it, so chocolate was a luxury imported into the empire. Those who lived in areas ruled by the Aztecs were required to offer cocoa seeds in payment of the tax they deemed "tribute". Cocoa beans were often used as currency. For example, the Aztecs used a system in which one turkey cost 100 cocoa beans and one fresh avocado was worth three beans.
The Maya and Aztecs associated cocoa with human sacrifice, and chocolate drinks specifically with sacrificial human blood. The Spanish royal chronicler Gonzalo Fernández de Oviedo y Valdés described a chocolate drink he had seen in Nicaragua in 1528, mixed with achiote: "because those people are fond of drinking human blood, to make this beverage seem like blood, they add a little achiote, so that it then turns red. ... and part of that foam is left on the lips and around the mouth, and when it is red for having achiote, it seems a horrific thing, because it seems like blood itself."
Until the 16th century, no European had ever heard of the popular drink from the Central American peoples. Christopher Columbus and his son Ferdinand encountered the cocoa bean on Columbus's fourth mission to the Americas on 15 August 1502, when he and his crew stole a large native canoe that proved to contain cocoa beans among other goods for trade. Spanish conquistador Hernán Cortés may have been the first European to encounter it, as the frothy drink was part of the after-dinner routine of Montezuma. José de Acosta, a Spanish Jesuit missionary who lived in Peru and then Mexico in the later 16th century, wrote of its growing influence on the Spaniards:
Although bananas are more profitable, cocoa is more highly esteemed in Mexico... Cocoa is a smaller fruit than almonds and thicker, which toasted do not taste bad. It is so prized among the Indians and even among Spaniards... because since it is a dried fruit it can be stored for a long time without deterioration, and they brings ships loaded with them from the province of Guatemala... It also serves as currency, because with five cocoas you can buy one thing, with thirty another, and with a hundred something else, without there being contradiction; and they give these cocoas as alms to the poor who beg for them. The principal product of this cocoa is a concoction which they make that they call "chocolate", which is a crazy thing treasured in that land, and those who are not accustomed are disgusted by it, because it has a foam on top and a bubbling like that of feces, which certainly takes a lot to put up with. Anyway, it is the prized beverage which the Indians offer to nobles who come to or pass through their lands; and the Spaniards, especially Spanish women born in those lands die for black chocolate. This aforementioned chocolate is said to be made in various forms and temperaments, hot, cold, and lukewarm. They are wont to use spices and much chili; they also make it into a paste, and it is said that it is a medicine to treat coughs, the stomach, and colds. Whatever may be the case, in fact those who have not been reared on this opinion are not appetized by it.
While Columbus had taken cocoa beans with him back to Spain, chocolate made no impact until Spanish friars introduced it to the Spanish court. After the Spanish conquest of the Aztecs, chocolate was imported to Europe. There, it quickly became a court favorite. It was still served as a beverage, but the Spanish added sugar, as well as honey (the original sweetener used by the Aztecs for chocolate), to counteract the natural bitterness. Vanilla, another indigenous American introduction, was also a popular additive, with pepper and other spices sometimes used to give the illusion of a more potent vanilla flavor. Unfortunately, these spices tended to unsettle the European constitution; the Encyclopédie states, "The pleasant scent and sublime taste it imparts to chocolate have made it highly recommended; but a long experience having shown that it could potentially upset one's stomach", which is why chocolate without vanilla was sometimes referred to as "healthy chocolate". By 1602, chocolate had made its way from Spain to Austria. By 1662, Pope Alexander VII had declared that religious fasts were not broken by consuming chocolate drinks. Within about a hundred years, chocolate established a foothold throughout Europe.
The new craze for chocolate brought with it a thriving slave market, as between the early 1600s and late 1800s, the laborious and slow processing of the cocoa bean was manual. Cocoa plantations spread, as the English, Dutch, and French colonized and planted. With the depletion of Mesoamerican workers, largely due to disease, cocoa production was often the work of poor wage laborers and African slaves. Wind-powered and horse-drawn mills were used to speed production, augmenting human labor. Heating the working areas of the table-mill, an innovation that emerged in France in 1732, also assisted in extraction.
In 1729, the first water-powered machinery to grind cocoa beans was developed by Charles Churchman and his son Walter in Bristol, England. In 1761, Joseph Fry and his partner John Vaughan bought Churchman's premises, founding Fry's. The same year, Fry and Vaughan also acquired their own patent for a water-powered machine that could grind the cocoa beans to a fine powder and thus produce a superior cocoa drink. In 1795, chocolate production entered the Industrial era when Fry's, under the founder's son Joseph Storrs Fry, used a Watt steam engine to grind cocoa beans. The Baker Chocolate Company, which makes Baker's Chocolate, is the oldest producer of chocolate in the United States. Founded by Dr. James Baker and John Hannon in Boston in 1765, the business is still in operation.
Although the drink remained the traditional form of consumption for a long time, solid chocolate has increasingly been consumed since the 18th century. Tablets, which facilitate the consumption of chocolate in solid form, have been produced since the early 19th century. Cailler (1819) and Menier (1836) are early examples. In 1830, chocolate was paired with hazelnuts, an innovation credited to Kohler.
Meanwhile, new processes that sped the production of chocolate emerged early in the Industrial Revolution. In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. A few years thereafter, in 1828, he created a press to remove about half the natural fat (cocoa butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate, allowing the mass-production of both pure cocoa butter and cocoa powder.
Known as "Dutch cocoa", this machine-pressed chocolate was instrumental in the transformation of chocolate to its solid form when, in 1847, English chocolatier Joseph Fry discovered a way to make chocolate more easily moldable when he mixed the ingredients of cocoa powder and sugar with melted cocoa butter. Subsequently, in 1866 his chocolate factory, Fry's, launched the first mass-produced chocolate bar, Fry's Chocolate Cream, and they became very popular. Milk had sometimes been used as an addition to chocolate beverages since the mid-17th century, but in 1875 Swiss chocolatier Daniel Peter invented milk chocolate by mixing a powdered milk developed by Henri Nestlé with the liquor. In 1879, the texture and taste of chocolate was further improved when Rudolphe Lindt invented the conching machine.
Besides Nestlé, several notable chocolate companies had their start in the late 19th and early 20th centuries. Rowntree's of York set up and began producing chocolate in 1862, after buying out the Tuke family business. Cadbury of Birmingham was manufacturing boxed chocolates in England by 1868. Manufacturing their first Easter egg in 1875, Cadbury created the modern chocolate Easter egg after developing a pure cocoa butter that could easily be molded into smooth shapes. In 1893, Milton S. Hershey purchased chocolate processing equipment at the World's Columbian Exposition in Chicago, and soon began the career of Hershey's chocolates with chocolate-coated caramels.
Cocoa, pronounced by the Olmecs as kakawa, dates to 1000 BC or earlier. The word "chocolate" entered the English language from Spanish in about 1600. The word entered Spanish from the word chocolātl in Nahuatl, the language of the Aztecs. The origin of the Nahuatl word is uncertain, as it does not appear in any early Nahuatl source, where the word for chocolate drink is cacahuatl, "cocoa water". It is possible that the Spaniards coined the word (perhaps in order to avoid caca, a vulgar Spanish word for "faeces") by combining the Yucatec Mayan word chocol, "hot", with the Nahuatl word atl, "water". A widely cited proposal, that the word derives from an unattested xocolatl meaning "bitter drink", is unsupported; the change from x- to ch- is unexplained, as is the -l-. Another proposed etymology derives it from the word chicolatl, meaning "beaten drink", which may derive from the word for the frothing stick, chicoli. Other scholars reject all these proposals, considering the origin of the first element of the name to be unknown. The term "chocolatier", for a chocolate confection maker, is attested from 1888.
Several types of chocolate can be distinguished. Pure, unsweetened chocolate, often called "baking chocolate", contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, which combines chocolate with sugar.
The traditional types of chocolate are dark, milk and white. All of them contain cocoa butter, which is the ingredient defining the physical properties of chocolate (consistency and melting temperature). Plain (or dark) chocolate, as its name suggests, is a form of chocolate that is similar to pure cocoa liquor, although it is usually made with a slightly higher proportion of cocoa butter. It is simply defined by its cocoa percentage. In milk chocolate, the non-fat cocoa solids are partly or mostly replaced by milk solids. In white chocolate, they are all replaced by milk solids, hence its ivory color.
Other forms of eating chocolate exist; these include raw chocolate (made with unroasted beans) and ruby chocolate. An additional popular form of eating chocolate, gianduja, is made by incorporating nut paste (typically hazelnut) into the chocolate paste.
Other types of chocolate are used in baking and confectionery. These include baking chocolate (often unsweetened), couverture chocolate (used for coating), compound chocolate (a lower-cost alternative) and modeling chocolate. Modeling chocolate is a chocolate paste made by melting chocolate and combining it with corn syrup, glucose syrup, or golden syrup.
Roughly two-thirds of the entire world's cocoa is produced in West Africa, with 43% sourced from Côte d'Ivoire, where, as of 2007, child labor is a common practice to obtain the product. According to the World Cocoa Foundation, in 2007 some 50 million people around the world depended on cocoa as a source of livelihood. As of 2007 in the UK, most chocolatiers purchase their chocolate from chocolate makers, to melt, mold and package to their own design. According to the WCF's 2012 report, the Ivory Coast is the largest producer of cocoa in the world. The two main jobs associated with creating chocolate candy are chocolate makers and chocolatiers. Chocolate makers use harvested cocoa beans and other ingredients to produce couverture chocolate (covering). Chocolatiers use the finished couverture to make chocolate candies (bars, truffles, etc.).
Production costs can be decreased by reducing cocoa solids content or by substituting cocoa butter with another fat. Cocoa growers object to allowing the resulting food to be called "chocolate", due to the risk of lower demand for their crops.
The sequencing in 2010 of the genome of the cacao tree may allow yields to be improved. Due to concerns about global warming effects on lowland climate in the narrow band of latitudes where cocoa is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated and the University of California, Berkeley, are conducting genomic research in 2017–18 to improve the survivability of cacao plants in hot climates.
Chocolate is made from cocoa beans, the dried and fermented seeds of the cacao tree (Theobroma cacao), a small, 4–8 m tall (15–26 ft tall) evergreen tree native to the deep tropical region of the Americas. Recent genetic studies suggest the most common genotype of the plant originated in the Amazon basin and was gradually transported by humans throughout South and Central America. Early forms of another genotype have also been found in what is now Venezuela. The scientific name, Theobroma, means "food of the gods". The fruit, called a cocoa pod, is ovoid, 15–30 cm (6–12 in) long and 8–10 cm (3–4 in) wide, ripening yellow to orange, and weighing about 500 g (1.1 lb) when ripe.
Cacao trees are small, understory trees that need rich, well-drained soils. They naturally grow within 20° of either side of the equator because they need about 2000 mm of rainfall a year, and temperatures in the range of 21 to 32 °C (70 to 90 °F). Cacao trees cannot tolerate a temperature lower than 15 °C (59 °F).
The three main varieties of cocoa beans used in chocolate are criollo, forastero, and trinitario.
Cocoa pods are harvested by cutting them from the tree using a machete, or by knocking them off the tree using a stick. It is important to harvest the pods when they are fully ripe, because if the pod is unripe, the beans will have a low cocoa butter content, or low sugar content, reducing the ultimate flavor.
The beans (which are sterile within their pods) and their surrounding pulp are removed from the pods and placed in piles or bins to ferment. Micro-organisms, present naturally in the environment, ferment the pectin-containing material. Yeasts produce ethanol, lactic acid bacteria produce lactic acid, and acetic acid bacteria produce acetic acid. In some cocoa-producing regions an association between filamentous fungi and bacteria (called "cocobiota") acts to produce metabolites beneficial to human health when consumed. The fermentation process, which takes up to seven days, also produces several flavor precursors that eventually provide the chocolate taste.
After fermentation, the beans must be dried to prevent mold growth. Climate and weather permitting, this is done by spreading the beans out in the sun from five to seven days. In some growing regions (for example, Tobago), the dried beans are then polished for sale by "dancing the cocoa": spreading the beans onto a floor, adding oil or water, and shuffling the beans against each other using bare feet.
The dried beans are then transported to a chocolate manufacturing facility. The beans are cleaned (removing twigs, stones, and other debris), roasted, and graded. Next, the shell of each bean is removed to extract the nib. The nibs are ground and liquefied, resulting in pure chocolate liquor. The liquor can be further processed into cocoa solids and cocoa butter.
In an alternative, fermentation-free process, the beans are simply dried. The nibs are removed and hydrated in an acidic solution, then heated for 72 hours and dried again. Gas chromatography/mass spectrometry showed that chocolate incubated in this way had higher levels of Strecker aldehydes and lower levels of pyrazines.
Chocolate liquor is blended with the cocoa butter in varying quantities to make different types of chocolate or couverture. The basic blends of ingredients for the various types of chocolate (in order of highest quantity of cocoa liquor first) are:
Usually, an emulsifying agent, such as soy lecithin, is added, though a few manufacturers prefer to exclude this ingredient for purity reasons and to remain GMO-free, sometimes at the cost of a perfectly smooth texture. Some manufacturers are now using PGPR, an artificial emulsifier derived from castor oil that allows them to reduce the amount of cocoa butter while maintaining the same mouthfeel.
The texture is also heavily influenced by processing, specifically conching (see below). The more expensive chocolate tends to be processed longer and thus has a smoother texture and mouthfeel, regardless of whether emulsifying agents are added.
Different manufacturers develop their own "signature" blends based on the above formulas, but varying proportions of the different constituents are used. The finest, plain dark chocolate couverture contains at least 70% cocoa (both solids and butter), whereas milk chocolate usually contains up to 50%. High-quality white chocolate couverture contains only about 35% cocoa butter.
Producers of high-quality, small-batch chocolate argue that mass production produces bad-quality chocolate. Some mass-produced chocolate contains much less cocoa (as low as 7% in many cases), and fats other than cocoa butter. Vegetable oils and artificial vanilla flavor are often used in cheaper chocolate to mask poorly fermented and/or roasted beans.
In 2007, the Chocolate Manufacturers Association in the United States, whose members include Hershey, Nestlé, and Archer Daniels Midland, lobbied the Food and Drug Administration (FDA) to change the legal definition of chocolate to let them substitute partially hydrogenated vegetable oils for cocoa butter, in addition to using artificial sweeteners and milk substitutes. Currently, the FDA does not allow a product to be referred to as "chocolate" if the product contains any of these ingredients.
In the EU a product can be sold as chocolate if it contains up to 5% vegetable oil, and must be labeled as "family milk chocolate" rather than "milk chocolate" if it contains 20% milk.
According to Canadian Food and Drug Regulations, a "chocolate product" is a food product that is sourced from at least one "cocoa product" and contains at least one of the following: "chocolate, bittersweet chocolate, semi-sweet chocolate, dark chocolate, sweet chocolate, milk chocolate, or white chocolate". A "cocoa product" is defined as a food product that is sourced from cocoa beans and contains "cocoa nibs, cocoa liquor, cocoa mass, unsweetened chocolate, bitter chocolate, chocolate liquor, cocoa, low-fat cocoa, cocoa powder, or low-fat cocoa powder".
The penultimate process is called conching. A conche is a container filled with metal beads, which act as grinders. The refined and blended chocolate mass is kept in a liquid state by frictional heat. Chocolate before conching has an uneven and gritty texture. The conching process produces cocoa and sugar particles smaller than the tongue can detect (typically around 20 μm) and reduces rough edges, hence the smooth feel in the mouth. The length of the conching process determines the final smoothness and quality of the chocolate. High-quality chocolate is conched for about 72 hours, and lesser grades about four to six hours. After the process is complete, the chocolate mass is stored in tanks heated to about 45 to 50 °C (113 to 122 °F) until final processing.
The final process is called tempering. Uncontrolled crystallization of cocoa butter typically results in crystals of varying size, some or all large enough to be seen with the naked eye. This causes the surface of the chocolate to appear mottled and matte, and causes the chocolate to crumble rather than snap when broken. The uniform sheen and crisp bite of properly processed chocolate are the results of consistently small cocoa butter crystals produced by the tempering process.
The fats in cocoa butter can crystallize in six different forms (polymorphous crystallization). The primary purpose of tempering is to assure that only the best form, Type V, is present. The six different crystal forms have different properties.
As a solid piece of chocolate, the cocoa butter fat particles are in a crystalline rigid structure that gives the chocolate its solid appearance. Once heated, the crystals of the polymorphic cocoa butter can break apart from the rigid structure and allow the chocolate to obtain a more fluid consistency as the temperature increases – the melting process. When the heat is removed, the cocoa butter crystals become rigid again and come closer together, allowing the chocolate to solidify.
The temperature at which the crystals obtain enough energy to break apart from their rigid conformation depends on the milk fat content in the chocolate and the shape of the fat molecules, as well as the form of the cocoa butterfat. Chocolate with a higher fat content will melt at a lower temperature.
Making chocolate considered "good" is about forming as many type V crystals as possible. This provides the best appearance and texture and creates the most stable crystals, so the texture and appearance will not degrade over time. To accomplish this, the temperature is carefully manipulated during the crystallization.
Generally, the chocolate is first heated to 45 °C (113 °F) to melt all six forms of crystals. Next, the chocolate is cooled to about 27 °C (81 °F), which will allow crystal types IV and V to form. At this temperature, the chocolate is agitated to create many small crystal "seeds" which will serve as nuclei to create small crystals in the chocolate. The chocolate is then heated to about 31 °C (88 °F) to eliminate any type IV crystals, leaving just type V. After this point, any excessive heating of the chocolate will destroy the temper and this process will have to be repeated. Other methods of chocolate tempering are used as well. The most common variant is introducing already tempered, solid "seed" chocolate. The temper of chocolate can be measured with a chocolate temper meter to ensure accuracy and consistency. A sample cup is filled with the chocolate and placed in the unit which then displays or prints the results.
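The staged temperatures described above amount to a simple schedule that can be written down and checked. The sketch below is a minimal illustration, not part of the source article: the three dark-chocolate targets are taken from the figures above, while the ±1 °C tolerance, the function name, and the printed messages are assumptions made only for this example.

```python
# Minimal sketch of the tempering schedule described above (dark chocolate).
# The tolerance, names, and output format are illustrative assumptions.

TEMPER_STAGES = [
    ("melt all six crystal forms", 45.0),        # heat to about 45 °C
    ("form type IV and V seed crystals", 27.0),  # cool to about 27 °C while agitating
    ("melt out type IV, keeping type V", 31.0),  # rewarm to about 31 °C
]

def check_schedule(readings_c, tolerance_c=1.0):
    """Compare thermometer readings (in °C) against the target of each stage."""
    for (label, target), actual in zip(TEMPER_STAGES, readings_c):
        status = "on target" if abs(actual - target) <= tolerance_c else "adjust"
        print(f"{label}: target {target} °C, measured {actual} °C -> {status}")

check_schedule([45.2, 27.4, 31.1])
```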
Two classic ways of manually tempering chocolate are:
Chocolate tempering machines (or temperers) with computer controls can be used for producing consistently tempered chocolate. In particular, continuous tempering machines are used in large-volume applications. Various methods and apparatuses exist for continuous flow tempering. In general, molten chocolate coming in at 40–50 °C is cooled in heat exchangers to crystallization temperatures of about 26–30 °C, passed through a tempering column consisting of spinning plates to induce shear, then warmed slightly to re-melt undesirable crystal formations.
Chocolate is molded in different shapes for different uses:
Chocolate is very sensitive to temperature and humidity. Ideal storage temperatures are between 15 and 17 °C (59 and 63 °F), with a relative humidity of less than 50%. If refrigerated or frozen without containment, chocolate can absorb enough moisture to cause a whitish discoloration, the result of fat or sugar crystals rising to the surface. Various types of "blooming" effects can occur if chocolate is stored or served improperly.
Chocolate bloom is caused by storage temperature fluctuating or exceeding 24 °C (75 °F), while sugar bloom is caused by temperature below 15 °C (59 °F) or excess humidity. To distinguish between different types of bloom, one can rub the surface of the chocolate lightly, and if the bloom disappears, it is fat bloom. Moving chocolate between temperature extremes can result in an oily texture. Although visually unappealing, chocolate suffering from bloom is safe for consumption and its taste is unaffected. Bloom can be reversed by retempering the chocolate or using it for any use that requires melting the chocolate.
Chocolate is generally stored away from other foods, as it can absorb different aromas. Ideally, chocolates are packed or wrapped, and placed in proper storage with the correct humidity and temperature. Additionally, chocolate is frequently stored in a dark place or protected from light by wrapping paper. The glossy shine, snap, aroma, texture, and taste of the chocolate can show the quality and if it was stored well.
One hundred grams of milk chocolate supplies 540 calories. It is 59% carbohydrates (52% as sugar and 3% as dietary fiber), 30% fat and 8% protein (table). Approximately 65% of the fat in milk chocolate is saturated, mainly palmitic acid and stearic acid, while the predominant unsaturated fat is oleic acid (table).
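As a rough arithmetic check of these figures, the stated macronutrient split can be converted back into energy using the standard 4/9/4 kcal-per-gram factors. The short sketch below is illustrative only and is not from the source article; the Atwater-style factors are a general assumption rather than something the text states.

```python
# Illustrative check (not from the source): energy of 100 g of milk chocolate
# reconstructed from the stated macronutrient split using assumed 4/9/4 kcal/g factors.

grams_per_100g = {"carbohydrate": 59, "fat": 30, "protein": 8}  # figures from the text
kcal_per_gram = {"carbohydrate": 4, "fat": 9, "protein": 4}     # assumed Atwater factors

estimated_kcal = sum(g * kcal_per_gram[name] for name, g in grams_per_100g.items())
print(f"Estimated energy: {estimated_kcal} kcal per 100 g (text states about 540)")
# prints: Estimated energy: 538 kcal per 100 g (text states about 540)
```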
One hundred grams of milk chocolate is an excellent source (over 19% of the Daily Value, DV) of riboflavin, vitamin B12 and the dietary minerals manganese, phosphorus and zinc. Chocolate is a good source (10–19% DV) of calcium, magnesium and iron.
Chocolate contains polyphenols, especially flavan-3-ols (catechins) and smaller amounts of other flavonoids. It also contains alkaloids, such as theobromine, phenethylamine, and caffeine, which are under study for their potential effects in the body.
Although research suggests that even low levels of lead in the body may be harmful to children, it is unlikely that chocolate consumption in small amounts causes lead poisoning. Some studies have shown that lead may bind to cocoa shells, and contamination may occur during the manufacturing process. One study showed the mean lead level in milk chocolate candy bars was 0.027 µg lead per gram of candy; another study found that some chocolate purchased at U.S. supermarkets contained up to 0.965 µg per gram, close to the international (voluntary) standard limit for lead in cocoa powder or beans, which is 1 µg of lead per gram. In 2006, the U.S. FDA lowered by one-fifth the amount of lead permissible in candy, but compliance is only voluntary. Studies concluded that "children, who are big consumers of chocolates, may be at risk of exceeding the daily limit of lead; whereas one 10 g cube of dark chocolate may contain as much as 20% of the daily lead oral limit", that "moreover chocolate may not be the only source of lead in their nutrition" and that "chocolate might be a significant source of cadmium and lead ingestion, particularly for children." According to a 2005 study, the average lead concentration of cocoa beans is ≤ 0.5 ng/g, which is one of the lowest reported values for a natural food. However, during cultivation and production, chocolate may absorb lead from the environment (such as in atmospheric emissions of now unused leaded gasoline).
The European Food Safety Authority recommended a tolerable weekly intake for cadmium of 2.5 micrograms per kg of body weight for Europeans, indicating that consuming chocolate products caused exposure of about 4% among all foods eaten. 1986 California Proposition 65 requires a warning label on chocolate products having more than 4.1 mg of cadmium per daily serving of a single product.
One tablespoonful (5 grams) of dry unsweetened cocoa powder has 12.1 mg of caffeine and a 25-g single serving of dark chocolate has 22.4 mg of caffeine. Although a single 7 oz. (200 ml) serving of coffee may contain 80–175 mg, studies have shown psychoactive effects in caffeine doses as low as 9 mg, and a dose as low as 12.5 mg was shown to have effects on cognitive performance.
Chocolate may be a factor for heartburn in some people because one of its constituents, theobromine, may affect the esophageal sphincter muscle in a way that permits stomach acids to enter the esophagus. Theobromine poisoning is an overdosage reaction to the bitter alkaloid, which happens more frequently in domestic animals than humans. However, daily intake of 50–100 g cocoa (0.8–1.5 g theobromine) by humans has been associated with sweating, trembling, and severe headache.
Chocolate and cocoa contain moderate to high amounts of oxalate, which may increase the risk of kidney stones.
In sufficient amounts, the theobromine found in chocolate is toxic to animals such as cats, dogs, horses, parrots, and small rodents because they are unable to metabolise the chemical effectively. If animals are fed chocolate, the theobromine may remain in the circulation for up to 20 hours, possibly causing epileptic seizures, heart attacks, internal bleeding, and eventually death. Medical treatment performed by a veterinarian involves inducing vomiting within two hours of ingestion and administration of benzodiazepines or barbiturates for seizures, antiarrhythmics for heart arrhythmias, and fluid diuresis.
A typical 20-kilogram (44 lb) dog will normally experience great intestinal distress after eating less than 240 grams (8.5 oz) of dark chocolate, but will not necessarily experience bradycardia or tachycardia unless it eats at least a half a kilogram (1.1 lb) of milk chocolate. Dark chocolate has 2 to 5 times more theobromine and thus is more dangerous to dogs. According to the Merck Veterinary Manual, approximately 1.3 grams of baker's chocolate per kilogram of a dog's body weight (0.02 oz/lb) is sufficient to cause symptoms of toxicity. For example, a typical 25-gram (0.88 oz) baker's chocolate bar would be enough to bring about symptoms in a 20-kilogram (44 lb) dog. In the 20th century, there were reports that mulch made from cacao bean shells is dangerous to dogs and livestock.
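The per-kilogram threshold quoted above lends itself to a short worked example. The sketch below is illustrative only: the 1.3 g/kg figure is the approximate value attributed to the Merck Veterinary Manual in the text, while the function name and rounding are assumptions made for this example.

```python
# Illustrative sketch (not from the source): applying the approximate
# 1.3 g-per-kg baker's chocolate threshold cited above to a 20 kg dog.

def bakers_chocolate_threshold_g(dog_weight_kg, dose_g_per_kg=1.3):
    """Approximate grams of baker's chocolate that may cause toxicity symptoms."""
    return dog_weight_kg * dose_g_per_kg

threshold_g = bakers_chocolate_threshold_g(20)  # a 20 kg (44 lb) dog
print(f"About {threshold_g:.0f} g of baker's chocolate may cause symptoms in a 20 kg dog")
# prints: About 26 g ... consistent with the 25 g bar example in the text
```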
Commonly consumed chocolate is high in fat and sugar, which are associated with an increased risk for obesity when chocolate is consumed in excess.
Overall evidence is insufficient to determine the relationship between chocolate consumption and acne. Various studies point not to chocolate, but to the high glycemic nature of certain foods, like sugar, corn syrup, and other simple carbohydrates, as potential causes of acne, along with other possible dietary factors.
Food, including chocolate, is not typically viewed as addictive. Some people, however, may want or crave chocolate, leading to a self-described term, chocoholic.
By some popular myths, chocolate is considered to be a mood enhancer, such as by increasing sex drive or stimulating cognition, but there is little scientific evidence that such effects are consistent among all chocolate consumers. If mood improvement from eating chocolate occurs, there is not enough research to indicate whether it results from the favorable flavor or from the stimulant effects of its constituents, such as caffeine, theobromine, or their parent molecule, methylxanthine. A 2019 review reported that chocolate consumption does not improve depressive mood.
Reviews support a short-term effect of lowering blood pressure by consuming cocoa products, but there is no evidence of long-term cardiovascular health benefit. Chocolate and cocoa are under preliminary research to determine if consumption affects the risk of certain cardiovascular diseases or cognitive abilities.
While daily consumption of cocoa flavanols (minimum dose of 200 mg) appears to benefit platelet and vascular function, there is no good evidence to indicate an effect on heart attacks or strokes. Research has also shown that consuming dark chocolate does not substantially affect blood pressure.
Some manufacturers provide the percentage of chocolate in a finished chocolate confection as a label quoting percentage of "cocoa" or "cacao". This refers to the combined percentage of both cocoa solids and cocoa butter in the bar, not just the percentage of cocoa solids. The Belgian AMBAO certification mark indicates that no non-cocoa vegetable fats have been used in making the chocolate. A long-standing dispute between Britain on the one hand and Belgium and France on the other over British use of vegetable fats in chocolate ended in 2000 with the adoption of new standards which permitted the use of up to five percent vegetable fats in clearly labelled products. This British style of chocolate has sometimes been pejoratively referred to as "vegelate".
Chocolates that are organic or fair trade certified carry labels accordingly.
In the United States, some large chocolate manufacturers lobbied the federal government to permit confections containing cheaper hydrogenated vegetable oil in place of cocoa butter to be sold as "chocolate". In June 2007, in response to consumer concern about the proposal, the FDA reiterated "Cacao fat, as one of the signature characteristics of the product, will remain a principal component of standardized chocolate."
Chocolate, prevalent throughout the world, is a steadily growing, US$50 billion-a-year worldwide business. Europe accounts for 45% of the world's chocolate revenue, and the US spent $20 billion in 2013. Big Chocolate is the grouping of major international chocolate companies in Europe and the U.S. U.S. companies Mars and Hershey's alone generated $13 billion a year in chocolate sales and accounted for two-thirds of U.S. production in 2004. Despite the expanding reach of the chocolate industry internationally, cocoa farmers and labourers in the Ivory Coast are often unaware of the uses of the beans; the high cost of chocolate products in the Ivory Coast makes them inaccessible to the majority of the population, who do not know what chocolate tastes like.
Chocolate manufacturers produce a range of products from chocolate bars to fudge. Large manufacturers of chocolate products include Cadbury (the world's largest confectionery manufacturer), Ferrero, Guylian, The Hershey Company, Lindt & Sprüngli, Mars, Incorporated, Milka, Neuhaus and Suchard.
Guylian is best known for its chocolate sea shells; Cadbury for its Dairy Milk and Creme Egg. The Hershey Company, the largest chocolate manufacturer in North America, produces the Hershey Bar and Hershey's Kisses. Mars Incorporated, a large privately owned U.S. corporation, produces Mars Bar, Milky Way, M&M's, Twix, and Snickers. Lindt is known for its truffle balls and gold foil-wrapped Easter bunnies.
Food conglomerates Nestlé SA and Kraft Foods both have chocolate brands. Nestlé acquired Rowntree's in 1988 and now markets chocolates under their brand, including Smarties (a chocolate candy) and Kit Kat (a chocolate bar); Kraft Foods, through its 1990 acquisition of Jacobs Suchard, now owns Milka and Suchard. In February 2010, Kraft also acquired British-based Cadbury; Fry's, Trebor Basset and the fair trade brand Green & Black's also belong to the group.
The widespread use of children in cocoa production is controversial, not only for the concerns about child labor and exploitation, but also because up to 12,000 of the 200,000 children working in the Ivory Coast, the world's biggest producer of cocoa, may be victims of trafficking or slavery. Most attention on this subject has focused on West Africa, which collectively supplies 69 percent of the world's cocoa, and the Ivory Coast in particular, which supplies 35 percent of the world's cocoa. Thirty percent of children under age 15 in sub-Saharan Africa are child laborers, mostly in agricultural activities including cocoa farming. Major chocolate producers, such as Nestlé, buy cocoa at commodities exchanges where Ivorian cocoa is mixed with other cocoa.
In 2009, Salvation Army International Development (SAID) UK stated that 12,000 children have been trafficked on cocoa farms in the Ivory Coast of Africa, where half of the world's chocolate is made. SAID UK states that it is these child slaves who are likely to be working in "harsh and abusive" conditions for the production of chocolate, and an increasing number of health-food and anti-slavery organisations are highlighting and campaigning against the use of trafficking in the chocolate industry.
As of 2017, approximately 2.1 million children in Ghana and Côte d'Ivoire were involved in farming cocoa, carrying heavy loads, clearing forests, and being exposed to pesticides. According to Sona Ebai, the former secretary-general of the Alliance of Cocoa Producing Countries: "I think child labor cannot be just the responsibility of industry to solve. I think it's the proverbial all-hands-on-deck: government, civil society, the private sector. And there, you need leadership." Reported in 2018, a 3-year pilot program – conducted by Nestlé with 26,000 farmers mostly located in Côte d'Ivoire – observed a 51% decrease in the number of children doing hazardous jobs in cocoa farming. The US Department of Labor formed the Child Labor Cocoa Coordinating Group as a public-private partnership with the governments of Ghana and Côte d'Ivoire to address child labor practices in the cocoa industry. The International Cocoa Initiative involving major cocoa manufacturers established the Child Labor Monitoring and Remediation System intended to monitor thousands of farms in Ghana and Côte d'Ivoire for child labor conditions, but the program reached less than 20% of the child laborers. Despite these efforts, goals to reduce child labor in West Africa by 70% before 2020 are frustrated by persistent poverty, absence of schools, expansion of cocoa farmland, and increased demand for cocoa.
In April 2018, the Cocoa Barometer report stated: "Not a single company or government is anywhere near reaching the sector-wide objective of the elimination of child labor, and not even near their commitments of a 70% reduction of child labor by 2020".
In the 2000s, some chocolate producers began to engage in fair trade initiatives, to address concerns about the marginalization of cocoa laborers in developing countries. Traditionally, Africa and other developing countries received low prices for their exported commodities such as cocoa, which caused poverty to abound. Fairtrade seeks to establish a system of direct trade from developing countries to counteract this unfair system. One solution for fair labor practices is for farmers to become part of an Agricultural cooperative. Cooperatives pay farmers a fair price for their cocoa so farmers have enough money for food, clothes, and school fees. One of the main tenets of fair trade is that farmers receive a fair price, but this does not mean that the larger amount of money paid for fair trade cocoa goes directly to the farmers. The effectiveness of fair trade has been questioned. In a 2014 article, The Economist stated that workers on fair trade farms have a lower standard of living than on similar farms outside the fair trade system.
Chocolate is sold in chocolate bars, which come in dark chocolate, milk chocolate and white chocolate varieties. Some bars that are mostly chocolate have other ingredients blended into the chocolate, such as nuts, raisins, or crisped rice. Chocolate is used as an ingredient in a huge variety of bars, which typically contain various confectionary ingredients (e.g., nougat, wafers, caramel, nuts, etc.) which are coated in chocolate.
Chocolate is used as a flavouring product in many desserts, such as chocolate cakes, chocolate brownies, chocolate mousse and chocolate chip cookies. Numerous types of candy and snacks contain chocolate, either as a filling (e.g., M&M's) or as a coating (e.g., chocolate-coated raisins or chocolate-coated peanuts).
Some non-alcoholic beverages contain chocolate, such as chocolate milk, hot chocolate, chocolate milkshakes and tejate. Some alcoholic liqueurs are flavoured with chocolate, such as chocolate liqueur and creme de cacao. Chocolate is a popular flavour of ice cream and pudding, and chocolate sauce is commonly added as a topping on ice cream sundaes. The caffè mocha is an espresso beverage containing chocolate.
Chocolate is associated with festivals such as Easter, when moulded chocolate rabbits and eggs are traditionally given in Christian communities, and Hanukkah, when chocolate coins are given in Jewish communities. Chocolate hearts and chocolate in heart-shaped boxes are popular on Valentine's Day and are often presented along with flowers and a greeting card. In 1868, Cadbury created a decorated box of chocolates in the shape of a heart for Valentine's Day. Boxes of filled chocolates quickly became associated with the holiday. Chocolate is an acceptable gift on other holidays and on occasions such as birthdays.
Many confectioners make holiday-specific chocolate candies. Chocolate Easter eggs or rabbits and Santa Claus figures are two examples. Such confections can be solid, hollow, or filled with sweets or fondant.
Chocolate has been the center of several successful book and film adaptations. In 1964, Roald Dahl published a children's novel titled Charlie and the Chocolate Factory. The novel centers on a poor boy named Charlie Bucket who takes a tour through the greatest chocolate factory in the world, owned by the eccentric Willy Wonka. Two film adaptations of the novel were produced: Willy Wonka & the Chocolate Factory (1971) and Charlie and the Chocolate Factory (2005). A third adaptation, an origin prequel film titled Wonka, is scheduled for release in 2023.
Like Water for Chocolate, a 1989 love story by novelist Laura Esquivel, was adapted to film in 1992. Chocolat, a 1999 novel by Joanne Harris, was adapted into the film Chocolat, which was released a year later. | [
{
"paragraph_id": 0,
"text": "Chocolate or cocoa is a food made from roasted and ground cacao seed kernels that is available as a liquid, solid, or paste, either on its own or as a flavoring agent in other foods. Cacao has been consumed in some form since at least the Olmec civilization (19th–11th century BCE), and later Mesoamerican civilizations also consumed chocolate beverages before being introduced to Europe in the 16th century.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The seeds of the cacao tree have an intense bitter taste and must be fermented to develop the flavor. After fermentation, the seeds are dried, cleaned, and roasted. The shell is removed to produce cocoa nibs, which are then ground to cocoa mass, unadulterated chocolate in rough form. Once the cocoa mass is liquefied by heating, it is called chocolate liquor. The liquor may also be cooled and processed into its two components: cocoa solids and cocoa butter. Baking chocolate, also called bitter chocolate, contains cocoa solids and cocoa butter in varying proportions without any added sugar. Powdered baking cocoa, which contains more fiber than cocoa butter, can be processed with alkali to produce Dutch cocoa. Much of the chocolate consumed today is in the form of sweet chocolate, a combination of cocoa solids, cocoa butter, or added vegetable oils and sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chocolate is one of the most popular food types and flavors in the world, and many foodstuffs involving chocolate exist, particularly desserts, including cakes, pudding, mousse, chocolate brownies, and chocolate chip cookies. Many candies are filled with or coated with sweetened chocolate. Chocolate bars, either made of solid chocolate or other ingredients coated in chocolate, are eaten as snacks. Gifts of chocolate molded into different shapes (such as eggs, hearts, and coins) are traditional on certain Western holidays, including Christmas, Easter, Valentine's Day, and Hanukkah. Chocolate is also used in cold and hot beverages, such as chocolate milk and hot chocolate, and in some alcoholic drinks, such as creme de cacao.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although cocoa originated in the Americas, West African countries, particularly Côte d'Ivoire and Ghana, are the leading producers of cocoa in the 21st century, accounting for some 60% of the world cocoa supply.",
"title": ""
},
{
"paragraph_id": 4,
"text": "With some two million children involved in the farming of cocoa in West Africa, child slavery and trafficking associated with the cocoa trade remain major concerns. A 2018 report argued that international attempts to improve conditions for children were doomed to failure because of persistent poverty, the absence of schools, increasing world cocoa demand, more intensive farming of cocoa, and continued exploitation of child labor.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Chocolate has been prepared as a drink for nearly all of its history. For example, one vessel found at an Olmec archaeological site on the Gulf Coast of Veracruz, Mexico, dates chocolate's preparation by pre-Olmec peoples as early as 1750 BC. On the Pacific coast of Chiapas, Mexico, a Mokaya archaeological site provides evidence of cocoa beverages dating even earlier to 1900 BC. The residues and the kind of vessel in which they were found indicate the initial use of cocoa was not simply as a beverage; the white pulp around the cocoa beans was likely used as a source of fermentable sugars for an alcoholic drink.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "An early Classic-period (460–480 AD) Maya tomb from the site in Rio Azul had vessels with the Maya glyph for cocoa on them with residue of a chocolate drink, which suggests that the Maya were drinking chocolate around 400 AD. Documents in Maya hieroglyphs stated that chocolate was used for ceremonial purposes in addition to everyday life. The Maya grew cacao trees in their backyards and used the cocoa seeds the trees produced to make a frothy, bitter drink.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "By the 15th century, the Aztecs had gained control of a large part of Mesoamerica and had adopted cocoa into their culture. They associated chocolate with Quetzalcoatl, who, according to one legend, was cast away by the other gods for sharing chocolate with humans, and identified its extrication from the pod with the removal of the human heart in sacrifice. In contrast to the Maya, who liked their chocolate warm, the Aztecs drank it cold, seasoning it with a broad variety of additives, including the petals of the Cymbopetalum penduliflorum tree, chili pepper, allspice, vanilla, and honey.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Aztecs were unable to grow cocoa themselves, as their home in the Mexican highlands was unsuitable for it, so chocolate was a luxury imported into the empire. Those who lived in areas ruled by the Aztecs were required to offer cocoa seeds in payment of the tax they deemed \"tribute\". Cocoa beans were often used as currency. For example, the Aztecs used a system in which one turkey cost 100 cocoa beans and one fresh avocado was worth three beans.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The Maya and Aztecs associated cocoa with human sacrifice, and chocolate drinks specifically with sacrificial human blood. The Spanish royal chronicler Gonzalo Fernández de Oviedo y Valdés described a chocolate drink he had seen in Nicaragua in 1528, mixed with achiote: \"because those people are fond of drinking human blood, to make this beverage seem like blood, they add a little achiote, so that it then turns red. ... and part of that foam is left on the lips and around the mouth, and when it is red for having achiote, it seems a horrific thing, because it seems like blood itself.\"",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Until the 16th century, no European had ever heard of the popular drink from the Central American peoples. Christopher Columbus and his son Ferdinand encountered the cocoa bean on Columbus's fourth mission to the Americas on 15 August 1502, when he and his crew stole a large native canoe that proved to contain cocoa beans among other goods for trade. Spanish conquistador Hernán Cortés may have been the first European to encounter it, as the frothy drink was part of the after-dinner routine of Montezuma. José de Acosta, a Spanish Jesuit missionary who lived in Peru and then Mexico in the later 16th century, wrote of its growing influence on the Spaniards:",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Although bananas are more profitable, cocoa is more highly esteemed in Mexico... Cocoa is a smaller fruit than almonds and thicker, which toasted do not taste bad. It is so prized among the Indians and even among Spaniards... because since it is a dried fruit it can be stored for a long time without deterioration, and they brings ships loaded with them from the province of Guatemala... It also serves as currency, because with five cocoas you can buy one thing, with thirty another, and with a hundred something else, without there being contradiction; and they give these cocoas as alms to the poor who beg for them. The principal product of this cocoa is a concoction which they make that they call \"chocolate\", which is a crazy thing treasured in that land, and those who are not accustomed are disgusted by it, because it has a foam on top and a bubbling like that of feces, which certainly takes a lot to put up with. Anyway, it is the prized beverage which the Indians offer to nobles who come to or pass through their lands; and the Spaniards, especially Spanish women born in those lands die for black chocolate. This aforementioned chocolate is said to be made in various forms and temperaments, hot, cold, and lukewarm. They are wont to use spices and much chili; they also make it into a paste, and it is said that it is a medicine to treat coughs, the stomach, and colds. Whatever may be the case, in fact those who have not been reared on this opinion are not appetized by it.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "While Columbus had taken cocoa beans with him back to Spain, chocolate made no impact until Spanish friars introduced it to the Spanish court. After the Spanish conquest of the Aztecs, chocolate was imported to Europe. There, it quickly became a court favorite. It was still served as a beverage, but the Spanish added sugar, as well as honey (the original sweetener used by the Aztecs for chocolate), to counteract the natural bitterness. Vanilla, another indigenous American introduction, was also a popular additive, with pepper and other spices sometimes used to give the illusion of a more potent vanilla flavor. Unfortunately, these spices tended to unsettle the European constitution; the Encyclopédie states, \"The pleasant scent and sublime taste it imparts to chocolate have made it highly recommended; but a long experience having shown that it could potentially upset one's stomach\", which is why chocolate without vanilla was sometimes referred to as \"healthy chocolate\". By 1602, chocolate had made its way from Spain to Austria. By 1662, Pope Alexander VII had declared that religious fasts were not broken by consuming chocolate drinks. Within about a hundred years, chocolate established a foothold throughout Europe.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "The new craze for chocolate brought with it a thriving slave market, as between the early 1600s and late 1800s, the laborious and slow processing of the cocoa bean was manual. Cocoa plantations spread, as the English, Dutch, and French colonized and planted. With the depletion of Mesoamerican workers, largely to disease, cocoa production was often the work of poor wage laborers and African slaves. Wind-powered and horse-drawn mills were used to speed production, augmenting human labor. Heating the working areas of the table-mill, an innovation that emerged in France in 1732, also assisted in extraction.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 1729, the first water-powered machinery to grind cocoa beans was developed by Charles Churchman and his son Walter in Bristol, England. In 1761, Joseph Fry and his partner John Vaughan bought Churchman's premises, founding Fry's. The same year, Fry and Vaughan also acquired their own patent for a water-powered machine that could grind the cocoa beans to a fine powder and thus produce a superior cocoa drink. In 1795, chocolate production entered the Industrial era when Fry's, under the founder's son Joseph Storrs Fry, used a Watt steam engine to ground cocoa beans. The Baker Chocolate Company, which makes Baker's Chocolate, is the oldest producer of chocolate in the United States. Founded by Dr. James Baker and John Hannon in Boston in 1765, the business is still in operation.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Despite the drink remaining the traditional form of consumption for a long time, solid chocolate was increasingly consumed since the 18th century. Tablets, facilitating the consumption of chocolate under its solid form, have been produced since the early 19th century. Cailler (1819) and Menier (1836) are early examples. In 1830, chocolate is paired with hazelnuts, an innovation due to Kohler.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Meanwhile, new processes that sped the production of chocolate emerged early in the Industrial Revolution. In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. A few years thereafter, in 1828, he created a press to remove about half the natural fat (cocoa butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate, allowing the mass-production of both pure cocoa butter and cocoa powder.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Known as \"Dutch cocoa\", this machine-pressed chocolate was instrumental in the transformation of chocolate to its solid form when, in 1847, English chocolatier Joseph Fry discovered a way to make chocolate more easily moldable when he mixed the ingredients of cocoa powder and sugar with melted cocoa butter. Subsequently, in 1866 his chocolate factory, Fry's, launched the first mass-produced chocolate bar, Fry's Chocolate Cream, and they became very popular. Milk had sometimes been used as an addition to chocolate beverages since the mid-17th century, but in 1875 Swiss chocolatier Daniel Peter invented milk chocolate by mixing a powdered milk developed by Henri Nestlé with the liquor. In 1879, the texture and taste of chocolate was further improved when Rudolphe Lindt invented the conching machine.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Besides Nestlé, several notable chocolate companies had their start in the late 19th and early 20th centuries. Rowntree's of York set up and began producing chocolate in 1862, after buying out the Tuke family business. Cadbury of Birmingham was manufacturing boxed chocolates in England by 1868. Manufacturing their first Easter egg in 1875, Cadbury created the modern chocolate Easter egg after developing a pure cocoa butter that could easily be molded into smooth shapes. In 1893, Milton S. Hershey purchased chocolate processing equipment at the World's Columbian Exposition in Chicago, and soon began the career of Hershey's chocolates with chocolate-coated caramels.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Cocoa, pronounced by the Olmecs as kakawa, dates to 1000 BC or earlier. The word \"chocolate\" entered the English language from Spanish in about 1600. The word entered Spanish from the word chocolātl in Nahuatl, the language of the Aztecs. The origin of the Nahuatl word is uncertain, as it does not appear in any early Nahuatl source, where the word for chocolate drink is cacahuatl, \"cocoa water\". It is possible that the Spaniards coined the word (perhaps in order to avoid caca, a vulgar Spanish word for \"faeces\") by combining the Yucatec Mayan word chocol, \"hot\", with the Nahuatl word atl, \"water\". A widely cited proposal is that the derives from unattested xocolatl meaning \"bitter drink\" is unsupported; the change from x- to ch- is unexplained, as is the -l-. Another proposed etymology derives it from the word chicolatl, meaning \"beaten drink\", which may derive from the word for the frothing stick, chicoli. Other scholars reject all these proposals, considering the origin of first element of the name to be unknown. The term \"chocolatier\", for a chocolate confection maker, is attested from 1888.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Several types of chocolate can be distinguished. Pure, unsweetened chocolate, often called \"baking chocolate\", contains primarily cocoa solids and cocoa butter in varying proportions. Much of the chocolate consumed today is in the form of sweet chocolate, which combines chocolate with sugar.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "The traditional types of chocolate are dark, milk and white. All of them contain cocoa butter, which is the ingredient defining the physical properties of chocolate (consistency and melting temperature). Plain (or dark) chocolate, as it name suggests, is a form of chocolate that is similar to pure cocoa liquor, although is usually made with a slightly higher proportion of cocoa butter. It is simply defined by its cocoa percentage. In milk chocolate, the non-fat cocoa solids are partly or mostly replaced by milk solids. In white chocolate, they are all replaced by milk solids, hence its ivory color.",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "Other forms of eating chocolate exist, these include raw chocolate (made with unroasted beans) and ruby chocolate. An additional popular form of eating chocolate, gianduja, is made by incorporating nut paste (typically hazelnut) to the chocolate paste.",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "Other types of chocolate are used in baking and confectionery. These include baking chocolate (often unsweetened), couverture chocolate (used for coating), compound chocolate (a lower-cost alternative) and modeling chocolate. Modeling chocolate is a chocolate paste made by melting chocolate and combining it with corn syrup, glucose syrup, or golden syrup.",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "Roughly two-thirds of the entire world's cocoa is produced in West Africa, with 43% sourced from Côte d'Ivoire, where, as of 2007, child labor is a common practice to obtain the product. According to the World Cocoa Foundation, in 2007 some 50 million people around the world depended on cocoa as a source of livelihood. As of 2007 in the UK, most chocolatiers purchase their chocolate from them, to melt, mold and package to their own design. According to the WCF's 2012 report, the Ivory Coast is the largest producer of cocoa in the world. The two main jobs associated with creating chocolate candy are chocolate makers and chocolatiers. Chocolate makers use harvested cocoa beans and other ingredients to produce couverture chocolate (covering). Chocolatiers use the finished couverture to make chocolate candies (bars, truffles, etc.).",
"title": "Production"
},
{
"paragraph_id": 25,
"text": "Production costs can be decreased by reducing cocoa solids content or by substituting cocoa butter with another fat. Cocoa growers object to allowing the resulting food to be called \"chocolate\", due to the risk of lower demand for their crops.",
"title": "Production"
},
{
"paragraph_id": 26,
"text": "The sequencing in 2010 of the genome of the cacao tree may allow yields to be improved. Due to concerns about global warming effects on lowland climate in the narrow band of latitudes where cocoa is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated and the University of California, Berkeley, are conducting genomic research in 2017–18 to improve the survivability of cacao plants in hot climates.",
"title": "Production"
},
{
"paragraph_id": 27,
"text": "Chocolate is made from cocoa beans, the dried and fermented seeds of the cacao tree (Theobroma cacao), a small, 4–8 m tall (15–26 ft tall) evergreen tree native to the deep tropical region of the Americas. Recent genetic studies suggest the most common genotype of the plant originated in the Amazon basin and was gradually transported by humans throughout South and Central America. Early forms of another genotype have also been found in what is now Venezuela. The scientific name, Theobroma, means \"food of the gods\". The fruit, called a cocoa pod, is ovoid, 15–30 cm (6–12 in) long and 8–10 cm (3–4 in) wide, ripening yellow to orange, and weighing about 500 g (1.1 lb) when ripe.",
"title": "Production"
},
{
"paragraph_id": 28,
"text": "Cacao trees are small, understory trees that need rich, well-drained soils. They naturally grow within 20° of either side of the equator because they need about 2000 mm of rainfall a year, and temperatures in the range of 21 to 32 °C (70 to 90 °F). Cacao trees cannot tolerate a temperature lower than 15 °C (59 °F).",
"title": "Production"
},
{
"paragraph_id": 29,
"text": "The three main varieties of cocoa beans used in chocolate are criollo, forastero, and trinitario.",
"title": "Production"
},
{
"paragraph_id": 30,
"text": "Cocoa pods are harvested by cutting them from the tree using a machete, or by knocking them off the tree using a stick. It is important to harvest the pods when they are fully ripe, because if the pod is unripe, the beans will have a low cocoa butter content, or low sugar content, reducing the ultimate flavor.",
"title": "Production"
},
{
"paragraph_id": 31,
"text": "The beans (which are sterile within their pods) and their surrounding pulp are removed from the pods and placed in piles or bins to ferment. Micro-organisms, present naturally in the environment, ferment the pectin-containing material. Yeasts produce ethanol, lactic acid bacteria produce lactic acid, and acetic acid bacteria produce acetic acid. In some cocoa-producing regions an association between filamentous fungi and bacteria (called \"cocobiota\") acts to produce metabolites beneficial to human health when consumed. The fermentation process, which takes up to seven days, also produces several flavor precursors, that eventually provide the chocolate taste.",
"title": "Production"
},
{
"paragraph_id": 32,
"text": "After fermentation, the beans must be dried to prevent mold growth. Climate and weather permitting, this is done by spreading the beans out in the sun from five to seven days. In some growing regions (for example, Tobago), the dried beans are then polished for sale by \"dancing the cocoa\": spreading the beans onto a floor, adding oil or water, and shuffling the beans against each other using bare feet.",
"title": "Production"
},
{
"paragraph_id": 33,
"text": "The dried beans are then transported to a chocolate manufacturing facility. The beans are cleaned (removing twigs, stones, and other debris), roasted, and graded. Next, the shell of each bean is removed to extract the nib. The nibs are ground and liquefied, resulting in pure chocolate liquor. The liquor can be further processed into cocoa solids and cocoa butter.",
"title": "Production"
},
{
"paragraph_id": 34,
"text": "The beans are dried without fermentation. The nibs are removed and hydrated in an acidic solution. Then they are heated for 72 hours and dried again. Gas chromatography/mass spectrometry showed that the incubated chocolate had higher levels of Strecker aldehydes, and lower levels of pyrazines.",
"title": "Production"
},
{
"paragraph_id": 35,
"text": "Chocolate liquor is blended with the cocoa butter in varying quantities to make different types of chocolate or couverture. The basic blends of ingredients for the various types of chocolate (in order of highest quantity of cocoa liquor first), are:",
"title": "Production"
},
{
"paragraph_id": 36,
"text": "Usually, an emulsifying agent, such as soy lecithin, is added, though a few manufacturers prefer to exclude this ingredient for purity reasons and to remain GMO-free, sometimes at the cost of a perfectly smooth texture. Some manufacturers are now using PGPR, an artificial emulsifier derived from castor oil that allows them to reduce the amount of cocoa butter while maintaining the same mouthfeel.",
"title": "Production"
},
{
"paragraph_id": 37,
"text": "The texture is also heavily influenced by processing, specifically conching (see below). The more expensive chocolate tends to be processed longer and thus has a smoother texture and mouthfeel, regardless of whether emulsifying agents are added.",
"title": "Production"
},
{
"paragraph_id": 38,
"text": "Different manufacturers develop their own \"signature\" blends based on the above formulas, but varying proportions of the different constituents are used. The finest, plain dark chocolate couverture contains at least 70% cocoa (both solids and butter), whereas milk chocolate usually contains up to 50%. High-quality white chocolate couverture contains only about 35% cocoa butter.",
"title": "Production"
},
{
"paragraph_id": 39,
"text": "Producers of high-quality, small-batch chocolate argue that mass production produces bad-quality chocolate. Some mass-produced chocolate contains much less cocoa (as low as 7% in many cases), and fats other than cocoa butter. Vegetable oils and artificial vanilla flavor are often used in cheaper chocolate to mask poorly fermented and/or roasted beans.",
"title": "Production"
},
{
"paragraph_id": 40,
"text": "In 2007, the Chocolate Manufacturers Association in the United States, whose members include Hershey, Nestlé, and Archer Daniels Midland, lobbied the Food and Drug Administration (FDA) to change the legal definition of chocolate to let them substitute partially hydrogenated vegetable oils for cocoa butter, in addition to using artificial sweeteners and milk substitutes. Currently, the FDA does not allow a product to be referred to as \"chocolate\" if the product contains any of these ingredients.",
"title": "Production"
},
{
"paragraph_id": 41,
"text": "In the EU a product can be sold as chocolate if it contains up to 5% vegetable oil, and must be labeled as \"family milk chocolate\" rather than \"milk chocolate\" if it contains 20% milk.",
"title": "Production"
},
{
"paragraph_id": 42,
"text": "According to Canadian Food and Drug Regulations, a \"chocolate product\" is a food product that is sourced from at least one \"cocoa product\" and contains at least one of the following: \"chocolate, bittersweet chocolate, semi-sweet chocolate, dark chocolate, sweet chocolate, milk chocolate, or white chocolate\". A \"cocoa product\" is defined as a food product that is sourced from cocoa beans and contains \"cocoa nibs, cocoa liquor, cocoa mass, unsweetened chocolate, bitter chocolate, chocolate liquor, cocoa, low-fat cocoa, cocoa powder, or low-fat cocoa powder\".",
"title": "Production"
},
{
"paragraph_id": 43,
"text": "The penultimate process is called conching. A conche is a container filled with metal beads, which act as grinders. The refined and blended chocolate mass is kept in a liquid state by frictional heat. Chocolate before conching has an uneven and gritty texture. The conching process produces cocoa and sugar particles smaller than the tongue can detect (typically around 20 μm) and reduces rough edges, hence the smooth feel in the mouth. The length of the conching process determines the final smoothness and quality of the chocolate. High-quality chocolate is conched for about 72 hours, and lesser grades about four to six hours. After the process is complete, the chocolate mass is stored in tanks heated to about 45 to 50 °C (113 to 122 °F) until final processing.",
"title": "Production"
},
{
"paragraph_id": 44,
"text": "The final process is called tempering. Uncontrolled crystallization of cocoa butter typically results in crystals of varying size, some or all large enough to be seen with the naked eye. This causes the surface of the chocolate to appear mottled and matte, and causes the chocolate to crumble rather than snap when broken. The uniform sheen and crisp bite of properly processed chocolate are the results of consistently small cocoa butter crystals produced by the tempering process.",
"title": "Production"
},
{
"paragraph_id": 45,
"text": "The fats in cocoa butter can crystallize in six different forms (polymorphous crystallization). The primary purpose of tempering is to assure that only the best form, Type V, is present. The six different crystal forms have different properties.",
"title": "Production"
},
{
"paragraph_id": 46,
"text": "As a solid piece of chocolate, the cocoa butter fat particles are in a crystalline rigid structure that gives the chocolate its solid appearance. Once heated, the crystals of the polymorphic cocoa butter can break apart from the rigid structure and allow the chocolate to obtain a more fluid consistency as the temperature increases – the melting process. When the heat is removed, the cocoa butter crystals become rigid again and come closer together, allowing the chocolate to solidify.",
"title": "Production"
},
{
"paragraph_id": 47,
"text": "The temperature in which the crystals obtain enough energy to break apart from their rigid conformation would depend on the milk fat content in the chocolate and the shape of the fat molecules, as well as the form of the cocoa butterfat. Chocolate with a higher fat content will melt at a lower temperature.",
"title": "Production"
},
{
"paragraph_id": 48,
"text": "Making chocolate considered \"good\" is about forming as many type V crystals as possible. This provides the best appearance and texture and creates the most stable crystals, so the texture and appearance will not degrade over time. To accomplish this, the temperature is carefully manipulated during the crystallization.",
"title": "Production"
},
{
"paragraph_id": 49,
"text": "Generally, the chocolate is first heated to 45 °C (113 °F) to melt all six forms of crystals. Next, the chocolate is cooled to about 27 °C (81 °F), which will allow crystal types IV and V to form. At this temperature, the chocolate is agitated to create many small crystal \"seeds\" which will serve as nuclei to create small crystals in the chocolate. The chocolate is then heated to about 31 °C (88 °F) to eliminate any type IV crystals, leaving just type V. After this point, any excessive heating of the chocolate will destroy the temper and this process will have to be repeated. Other methods of chocolate tempering are used as well. The most common variant is introducing already tempered, solid \"seed\" chocolate. The temper of chocolate can be measured with a chocolate temper meter to ensure accuracy and consistency. A sample cup is filled with the chocolate and placed in the unit which then displays or prints the results.",
"title": "Production"
},
{
"paragraph_id": 50,
"text": "Two classic ways of manually tempering chocolate are:",
"title": "Production"
},
{
"paragraph_id": 51,
"text": "Chocolate tempering machines (or temperers) with computer controls can be used for producing consistently tempered chocolate. In particular, continuous tempering machines are used in large volume applications. Various methods and apparatuses for continuous flow tempering. In general, molten chocolate coming in at 40–50 °C is cooled in heat exchangers to crystallization temperates of about 26–30 °C, passed through a tempering column consisting of spinning plates to induce shear, then warmed slightly to re-melt undesirable crystal formations.",
"title": "Production"
},
{
"paragraph_id": 52,
"text": "Chocolate is molded in different shapes for different uses:",
"title": "Production"
},
{
"paragraph_id": 53,
"text": "Chocolate is very sensitive to temperature and humidity. Ideal storage temperatures are between 15 and 17 °C (59 and 63 °F), with a relative humidity of less than 50%. If refrigerated or frozen without containment, chocolate can absorb enough moisture to cause a whitish discoloration, the result of fat or sugar crystals rising to the surface. Various types of \"blooming\" effects can occur if chocolate is stored or served improperly.",
"title": "Production"
},
{
"paragraph_id": 54,
"text": "Chocolate bloom is caused by storage temperature fluctuating or exceeding 24 °C (75 °F), while sugar bloom is caused by temperature below 15 °C (59 °F) or excess humidity. To distinguish between different types of bloom, one can rub the surface of the chocolate lightly, and if the bloom disappears, it is fat bloom. Moving chocolate between temperature extremes, can result in an oily texture. Although visually unappealing, chocolate suffering from bloom is safe for consumption and taste unaffected. Bloom can be reversed by retempering the chocolate or using it for any use that requires melting the chocolate.",
"title": "Production"
},
{
"paragraph_id": 55,
"text": "Chocolate is generally stored away from other foods, as it can absorb different aromas. Ideally, chocolates are packed or wrapped, and placed in proper storage with the correct humidity and temperature. Additionally, chocolate is frequently stored in a dark place or protected from light by wrapping paper. The glossy shine, snap, aroma, texture, and taste of the chocolate can show the quality and if it was stored well.",
"title": "Production"
},
{
"paragraph_id": 56,
"text": "One hundred grams of milk chocolate supplies 540 calories. It is 59% carbohydrates (52% as sugar and 3% as dietary fiber), 30% fat and 8% protein (table). Approximately 65% of the fat in milk chocolate is saturated, mainly palmitic acid and stearic acid, while the predominant unsaturated fat is oleic acid (table).",
"title": "Nutrition"
},
{
"paragraph_id": 57,
"text": "100-grams of milk chocolate is an excellent source (over 19% of the Daily Value, DV) of riboflavin, vitamin B12 and the dietary minerals, manganese, phosphorus and zinc. Chocolate is a good source (10–19% DV) of calcium, magnesium and iron.",
"title": "Nutrition"
},
{
"paragraph_id": 58,
"text": "Chocolate contains polyphenols, especially flavan-3-ols (catechins) and smaller amounts of other flavonoids. It also contains alkaloids, such as theobromine, phenethylamine, and caffeine. which are under study for their potential effects in the body.",
"title": "Health effects"
},
{
"paragraph_id": 59,
"text": "Although research suggests that even low levels of lead in the body may be harmful to children, it is unlikely that chocolate consumption in small amounts causes lead poisoning. Some studies have shown that lead may bind to cocoa shells, and contamination may occur during the manufacturing process. One study showed the mean lead level in milk chocolate candy bars was 0.027 µg lead per gram of candy; another study found that some chocolate purchased at U.S. supermarkets contained up to 0.965 µg per gram, close to the international (voluntary) standard limit for lead in cocoa powder or beans, which is 1 µg of lead per gram. In 2006, the U.S. FDA lowered by one-fifth the amount of lead permissible in candy, but compliance is only voluntary. Studies concluded that \"children, who are big consumers of chocolates, may be at risk of exceeding the daily limit of lead; whereas one 10 g cube of dark chocolate may contain as much as 20% of the daily lead oral limit. \"Moreover chocolate may not be the only source of lead in their nutrition\" and \"chocolate might be a significant source of cadmium and lead ingestion, particularly for children.\" According to a 2005 study, the average lead concentration of cocoa beans is ≤ 0.5 ng/g, which is one of the lowest reported values for a natural food. However, during cultivation and production, chocolate may absorb lead from the environment (such as in atmospheric emissions of now unused leaded gasoline).",
"title": "Health effects"
},
{
"paragraph_id": 60,
"text": "The European Food Safety Authority recommended a tolerable weekly intake for cadmium of 2.5 micrograms per kg of body weight for Europeans, indicating that consuming chocolate products caused exposure of about 4% among all foods eaten. 1986 California Proposition 65 requires a warning label on chocolate products having more than 4.1 mg of cadmium per daily serving of a single product.",
"title": "Health effects"
},
{
"paragraph_id": 61,
"text": "One tablespoonful (5 grams) of dry unsweetened cocoa powder has 12.1 mg of caffeine and a 25-g single serving of dark chocolate has 22.4 mg of caffeine. Although a single 7 oz. (200 ml) serving of coffee may contain 80–175 mg, studies have shown psychoactive effects in caffeine doses as low as 9 mg, and a dose as low as 12.5 mg was shown to have effects on cognitive performance.",
"title": "Health effects"
},
{
"paragraph_id": 62,
"text": "Chocolate may be a factor for heartburn in some people because one of its constituents, theobromine, may affect the esophageal sphincter muscle in a way that permits stomach acids to enter the esophagus. Theobromine poisoning is an overdosage reaction to the bitter alkaloid, which happens more frequently in domestic animals than humans. However, daily intake of 50–100 g cocoa (0.8–1.5 g theobromine) by humans has been associated with sweating, trembling, and severe headache.",
"title": "Health effects"
},
{
"paragraph_id": 63,
"text": "Chocolate and cocoa contain moderate to high amounts of oxalate, which may increase the risk of kidney stones.",
"title": "Health effects"
},
{
"paragraph_id": 64,
"text": "In sufficient amounts, the theobromine found in chocolate is toxic to animals such as cats, dogs, horses, parrots, and small rodents because they are unable to metabolise the chemical effectively. If animals are fed chocolate, the theobromine may remain in the circulation for up to 20 hours, possibly causing epileptic seizures, heart attacks, internal bleeding, and eventually death. Medical treatment performed by a veterinarian involves inducing vomiting within two hours of ingestion and administration of benzodiazepines or barbiturates for seizures, antiarrhythmics for heart arrhythmias, and fluid diuresis.",
"title": "Health effects"
},
{
"paragraph_id": 65,
"text": "A typical 20-kilogram (44 lb) dog will normally experience great intestinal distress after eating less than 240 grams (8.5 oz) of dark chocolate, but will not necessarily experience bradycardia or tachycardia unless it eats at least a half a kilogram (1.1 lb) of milk chocolate. Dark chocolate has 2 to 5 times more theobromine and thus is more dangerous to dogs. According to the Merck Veterinary Manual, approximately 1.3 grams of baker's chocolate per kilogram of a dog's body weight (0.02 oz/lb) is sufficient to cause symptoms of toxicity. For example, a typical 25-gram (0.88 oz) baker's chocolate bar would be enough to bring about symptoms in a 20-kilogram (44 lb) dog. In the 20th century, there were reports that mulch made from cacao bean shells is dangerous to dogs and livestock.",
"title": "Health effects"
},
{
"paragraph_id": 66,
"text": "Commonly consumed chocolate is high in fat and sugar, which are associated with an increased risk for obesity when chocolate is consumed in excess.",
"title": "Research"
},
{
"paragraph_id": 67,
"text": "Overall evidence is insufficient to determine the relationship between chocolate consumption and acne. Various studies point not to chocolate, but to the high glycemic nature of certain foods, like sugar, corn syrup, and other simple carbohydrates, as potential causes of acne, along with other possible dietary factors.",
"title": "Research"
},
{
"paragraph_id": 68,
"text": "Food, including chocolate, is not typically viewed as addictive. Some people, however, may want or crave chocolate, leading to a self-described term, chocoholic.",
"title": "Research"
},
{
"paragraph_id": 69,
"text": "By some popular myths, chocolate is considered to be a mood enhancer, such as by increasing sex drive or stimulating cognition, but there is little scientific evidence that such effects are consistent among all chocolate consumers. If mood improvement from eating chocolate occurs, there is not enough research to indicate whether it results from the favorable flavor or from the stimulant effects of its constituents, such as caffeine, theobromine, or their parent molecule, methylxanthine. A 2019 review reported that chocolate consumption does not improve depressive mood.",
"title": "Research"
},
{
"paragraph_id": 70,
"text": "Reviews support a short-term effect of lowering blood pressure by consuming cocoa products, but there is no evidence of long-term cardiovascular health benefit. Chocolate and cocoa are under preliminary research to determine if consumption affects the risk of certain cardiovascular diseases or cognitive abilities.",
"title": "Research"
},
{
"paragraph_id": 71,
"text": "While daily consumption of cocoa flavanols (minimum dose of 200 mg) appears to benefit platelet and vascular function, there is no good evidence to indicate an effect on heart attacks or strokes. Research has also shown that consuming dark chocolate does not substantially affect blood pressure.",
"title": "Research"
},
{
"paragraph_id": 72,
"text": "Some manufacturers provide the percentage of chocolate in a finished chocolate confection as a label quoting percentage of \"cocoa\" or \"cacao\". This refers to the combined percentage of both cocoa solids and cocoa butter in the bar, not just the percentage of cocoa solids. The Belgian AMBAO certification mark indicates that no non-cocoa vegetable fats have been used in making the chocolate. A long-standing dispute between Britain on the one hand and Belgium and France over British use of vegetable fats in chocolate ended in 2000 with the adoption of new standards which permitted the use of up to five percent vegetable fats in clearly labelled products. This British style of chocolate has sometimes been pejoratively referred to as \"vegelate\".",
"title": "Labeling"
},
{
"paragraph_id": 73,
"text": "Chocolates that are organic or fair trade certified carry labels accordingly.",
"title": "Labeling"
},
{
"paragraph_id": 74,
"text": "In the United States, some large chocolate manufacturers lobbied the federal government to permit confections containing cheaper hydrogenated vegetable oil in place of cocoa butter to be sold as \"chocolate\". In June 2007, in response to consumer concern about the proposal, the FDA reiterated \"Cacao fat, as one of the signature characteristics of the product, will remain a principal component of standardized chocolate.\"",
"title": "Labeling"
},
{
"paragraph_id": 75,
"text": "Chocolate, prevalent throughout the world, is a steadily growing, US$50 billion-a-year worldwide business. Europe accounts for 45% of the world's chocolate revenue, and the US spent $20 billion in 2013. Big Chocolate is the grouping of major international chocolate companies in Europe and the U.S. U.S. companies Mars and Hershey's alone generated $13 billion a year in chocolate sales and account for two-thirds of U.S. production in 2004. Despite the expanding reach of the chocolate industry internationally, cocoa farmers and labourers in the Ivory Coast are often unaware of the uses of the beans; the high cost of chocolate products in the Ivory Coast makes them inaccessible to the majority of the population, who do not know what chocolate tastes like.",
"title": "Industry"
},
{
"paragraph_id": 76,
"text": "Chocolate manufacturers produce a range of products from chocolate bars to fudge. Large manufacturers of chocolate products include Cadbury (the world's largest confectionery manufacturer), Ferrero, Guylian, The Hershey Company, Lindt & Sprüngli, Mars, Incorporated, Milka, Neuhaus and Suchard.",
"title": "Industry"
},
{
"paragraph_id": 77,
"text": "Guylian is best known for its chocolate sea shells; Cadbury for its Dairy Milk and Creme Egg. The Hershey Company, the largest chocolate manufacturer in North America, produces the Hershey Bar and Hershey's Kisses. Mars Incorporated, a large privately owned U.S. corporation, produces Mars Bar, Milky Way, M&M's, Twix, and Snickers. Lindt is known for its truffle balls and gold foil-wrapped Easter bunnies.",
"title": "Industry"
},
{
"paragraph_id": 78,
"text": "Food conglomerates Nestlé SA and Kraft Foods both have chocolate brands. Nestlé acquired Rowntree's in 1988 and now markets chocolates under their brand, including Smarties (a chocolate candy) and Kit Kat (a chocolate bar); Kraft Foods through its 1990 acquisition of Jacobs Suchard, now owns Milka and Suchard. In February 2010, Kraft also acquired British-based Cadbury; Fry's, Trebor Basset and the fair trade brand Green & Black's also belongs to the group.",
"title": "Industry"
},
{
"paragraph_id": 79,
"text": "The widespread use of children in cocoa production is controversial, not only for the concerns about child labor and exploitation, but also because up to 12,000 of the 200,000 children working in the Ivory Coast, the world's biggest producer of cocoa, may be victims of trafficking or slavery. Most attention on this subject has focused on West Africa, which collectively supplies 69 percent of the world's cocoa, and the Ivory Coast in particular, which supplies 35 percent of the world's cocoa. Thirty percent of children under age 15 in sub-Saharan Africa are child laborers, mostly in agricultural activities including cocoa farming. Major chocolate producers, such as Nestlé, buy cocoa at commodities exchanges where Ivorian cocoa is mixed with other cocoa.",
"title": "Industry"
},
{
"paragraph_id": 80,
"text": "In 2009, Salvation Army International Development (SAID) UK stated that 12,000 children have been trafficked on cocoa farms in the Ivory Coast of Africa, where half of the world's chocolate is made. SAID UK states that it is these child slaves who are likely to be working in \"harsh and abusive\" conditions for the production of chocolate, and an increasing number of health-food and anti-slavery organisations are highlighting and campaigning against the use of trafficking in the chocolate industry.",
"title": "Industry"
},
{
"paragraph_id": 81,
"text": "As of 2017, approximately 2.1 million children in Ghana and Côte d'Ivoire were involved in farming cocoa, carrying heavy loads, clearing forests, and being exposed to pesticides. According to Sona Ebai, the former secretary-general of the Alliance of Cocoa Producing Countries: \"I think child labor cannot be just the responsibility of industry to solve. I think it's the proverbial all-hands-on-deck: government, civil society, the private sector. And there, you need leadership.\" Reported in 2018, a 3-year pilot program – conducted by Nestlé with 26,000 farmers mostly located in Côte d'Ivoire – observed a 51% decrease in the number of children doing hazardous jobs in cocoa farming. The US Department of Labor formed the Child Labor Cocoa Coordinating Group as a public-private partnership with the governments of Ghana and Côte d'Ivoire to address child labor practices in the cocoa industry. The International Cocoa Initiative involving major cocoa manufacturers established the Child Labor Monitoring and Remediation System intended to monitor thousands of farms in Ghana and Côte d'Ivoire for child labor conditions, but the program reached less than 20% of the child laborers. Despite these efforts, goals to reduce child labor in West Africa by 70% before 2020 are frustrated by persistent poverty, absence of schools, expansion of cocoa farmland, and increased demand for cocoa.",
"title": "Industry"
},
{
"paragraph_id": 82,
"text": "In April 2018, the Cocoa Barometer report stated: \"Not a single company or government is anywhere near reaching the sector-wide objective of the elimination of child labor, and not even near their commitments of a 70% reduction of child labor by 2020\".",
"title": "Industry"
},
{
"paragraph_id": 83,
"text": "In the 2000s, some chocolate producers began to engage in fair trade initiatives, to address concerns about the marginalization of cocoa laborers in developing countries. Traditionally, Africa and other developing countries received low prices for their exported commodities such as cocoa, which caused poverty to abound. Fairtrade seeks to establish a system of direct trade from developing countries to counteract this unfair system. One solution for fair labor practices is for farmers to become part of an Agricultural cooperative. Cooperatives pay farmers a fair price for their cocoa so farmers have enough money for food, clothes, and school fees. One of the main tenets of fair trade is that farmers receive a fair price, but this does not mean that the larger amount of money paid for fair trade cocoa goes directly to the farmers. The effectiveness of fair trade has been questioned. In a 2014 article, The Economist stated that workers on fair trade farms have a lower standard of living than on similar farms outside the fair trade system.",
"title": "Industry"
},
{
"paragraph_id": 84,
"text": "Chocolate is sold in chocolate bars, which come in dark chocolate, milk chocolate and white chocolate varieties. Some bars that are mostly chocolate have other ingredients blended into the chocolate, such as nuts, raisins, or crisped rice. Chocolate is used as an ingredient in a huge variety of bars, which typically contain various confectionary ingredients (e.g., nougat, wafers, caramel, nuts, etc.) which are coated in chocolate.",
"title": "Usage and consumption"
},
{
"paragraph_id": 85,
"text": "Chocolate is used as a flavouring product in many desserts, such as chocolate cakes, chocolate brownies, chocolate mousse and chocolate chip cookies. Numerous types of candy and snacks contain chocolate, either as a filling (e.g., M&M's) or as a coating (e.g., chocolate-coated raisins or chocolate-coated peanuts).",
"title": "Usage and consumption"
},
{
"paragraph_id": 86,
"text": "Some non-alcoholic beverages contain chocolate, such as chocolate milk, hot chocolate, chocolate milkshakes and tejate. Some alcoholic liqueurs are flavoured with chocolate, such as chocolate liqueur and creme de cacao. Chocolate is a popular flavour of ice cream and pudding, and chocolate sauce is a commonly added as a topping on ice cream sundaes. The caffè mocha is an espresso beverage containing chocolate.",
"title": "Usage and consumption"
},
{
"paragraph_id": 87,
"text": "Chocolate is associated with festivals such as Easter, when moulded chocolate rabbits and eggs are traditionally given in Christian communities, and Hanukkah, when chocolate coins are given in Jewish communities. Chocolate hearts and chocolate in heart-shaped boxes are popular on Valentine's Day and are often presented along with flowers and a greeting card. In 1868, Cadbury created a decorated box of chocolates in the shape of a heart for Valentine's Day. Boxes of filled chocolates quickly became associated with the holiday. Chocolate is an acceptable gift on other holidays and on occasions such as birthdays.",
"title": "Popular culture"
},
{
"paragraph_id": 88,
"text": "Many confectioners make holiday-specific chocolate candies. Chocolate Easter eggs or rabbits and Santa Claus figures are two examples. Such confections can be solid, hollow, or filled with sweets or fondant.",
"title": "Popular culture"
},
{
"paragraph_id": 89,
"text": "Chocolate has been the center of several successful book and film adaptations. In 1964, Roald Dahl published a children's novel titled Charlie and the Chocolate Factory. The novel centers on a poor boy named Charlie Bucket who takes a tour through the greatest chocolate factory in the world, owned by the eccentric Willy Wonka. Two film adaptations of the novel were produced: Willy Wonka & the Chocolate Factory (1971) and Charlie and the Chocolate Factory (2005). A third adaptation, an origin prequel film titled Wonka, is scheduled for release in 2023.",
"title": "Popular culture"
},
{
"paragraph_id": 90,
"text": "Like Water for Chocolate a 1989 love story by novelist Laura Esquivel, was adapted to film in 1992. Chocolat, a 1999 novel by Joanne Harris, was adapted for film in Chocolat which was released a year later.",
"title": "Popular culture"
}
] | Chocolate or cocoa is a food made from roasted and ground cacao seed kernels that is available as a liquid, solid, or paste, either on its own or as a flavoring agent in other foods. Cacao has been consumed in some form since at least the Olmec civilization, and later Mesoamerican civilizations also consumed chocolate beverages before being introduced to Europe in the 16th century. The seeds of the cacao tree have an intense bitter taste and must be fermented to develop the flavor. After fermentation, the seeds are dried, cleaned, and roasted. The shell is removed to produce cocoa nibs, which are then ground to cocoa mass, unadulterated chocolate in rough form. Once the cocoa mass is liquefied by heating, it is called chocolate liquor. The liquor may also be cooled and processed into its two components: cocoa solids and cocoa butter. Baking chocolate, also called bitter chocolate, contains cocoa solids and cocoa butter in varying proportions without any added sugar. Powdered baking cocoa, which contains more fiber than cocoa butter, can be processed with alkali to produce Dutch cocoa. Much of the chocolate consumed today is in the form of sweet chocolate, a combination of cocoa solids, cocoa butter, or added vegetable oils and sugar. Milk chocolate is sweet chocolate that additionally contains milk powder or condensed milk. White chocolate contains cocoa butter, sugar, and milk, but no cocoa solids. Chocolate is one of the most popular food types and flavors in the world, and many foodstuffs involving chocolate exist, particularly desserts, including cakes, pudding, mousse, chocolate brownies, and chocolate chip cookies. Many candies are filled with or coated with sweetened chocolate. Chocolate bars, either made of solid chocolate or other ingredients coated in chocolate, are eaten as snacks. Gifts of chocolate molded into different shapes are traditional on certain Western holidays, including Christmas, Easter, Valentine's Day, and Hanukkah. Chocolate is also used in cold and hot beverages, such as chocolate milk and hot chocolate, and in some alcoholic drinks, such as creme de cacao. Although cocoa originated in the Americas, West African countries, particularly Côte d'Ivoire and Ghana, are the leading producers of cocoa in the 21st century, accounting for some 60% of the world cocoa supply. With some two million children involved in the farming of cocoa in West Africa, child slavery and trafficking associated with the cocoa trade remain major concerns. A 2018 report argued that international attempts to improve conditions for children were doomed to failure because of persistent poverty, the absence of schools, increasing world cocoa demand, more intensive farming of cocoa, and continued exploitation of child labor. | 2001-11-13T19:08:07Z | 2023-12-26T16:25:15Z | [
"Template:Wiktionary-inline",
"Template:TOC limit",
"Template:Also",
"Template:Cbignore",
"Template:Wikiquote-inline",
"Template:Chocolate",
"Template:Wikibooks-inline",
"Template:Pp-move-indef",
"Template:Use American English",
"Template:See also",
"Template:Convert",
"Template:Portal",
"Template:Div col",
"Template:Webarchive",
"Template:Dead link",
"Template:Other uses",
"Template:Sfnp",
"Template:As of",
"Template:Wikivoyage-inline",
"Template:OEtymD",
"Template:ISBN",
"Template:Wikisource-inline",
"Template:Pp-semi",
"Template:Infobox food",
"Template:Cite news",
"Template:Cite web",
"Template:Cite book",
"Template:Commons category-inline",
"Template:Reflist",
"Template:Cite journal",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Nutritional value",
"Template:Div col end",
"Template:More references",
"Template:Cite magazine",
"Template:Short description",
"Template:Main",
"Template:Anchor"
] | https://en.wikipedia.org/wiki/Chocolate |
7,100 | Cornet | The cornet (/ˈkɔːrnɪt/, US: /kɔːrˈnɛt/) is a brass instrument similar to the trumpet but distinguished from it by its conical bore, more compact shape, and mellower tone quality. The most common cornet is a transposing instrument in B♭. There is also a soprano cornet in E♭ and cornets in A and C. All are unrelated to the Renaissance and early Baroque cornett.
The cornet was derived from the posthorn by applying rotary valves to it in the 1820s, in France. However, by the 1830s, Parisian makers were using piston valves. Cornets first appeared as separate instrumental parts in 19th-century French compositions.
The instrument could not have been developed without the improvement of piston valves by Silesian horn players Friedrich Blühmel (or Blümel) and Heinrich Stölzel, in the early 19th century. These two instrument makers almost simultaneously invented valves, though it is likely that Blühmel was the inventor, while Stölzel developed a practical instrument. They were jointly granted a patent for a period of ten years. François Périnet received a patent in 1838 for an improved valve, which became the model for modern brass instrument piston valves. The first notable virtuoso player was Jean-Baptiste Arban, who studied the cornet extensively and published La grande méthode complète de cornet à piston et de saxhorn, commonly referred to as the Arban method, in 1864. Up until the early 20th century, the trumpet and cornet co-existed in musical ensembles; symphonic repertoire often involves separate parts for trumpet and cornet. As several instrument builders made improvements to both instruments, they started to look and sound more alike. The modern-day cornet is used in brass bands, concert bands, and in specific orchestral repertoire that requires a more mellow sound.
The name "cornet" derives from the French corne, meaning "horn", itself from Latin cornu. While not musically related, instruments of the Zink family (which includes serpents) are named "cornetto" or "cornett" in modern English, to distinguish them from the valved cornet described here. The 11th edition of the Encyclopædia Britannica referred to serpents as "old wooden cornets". The Roman/Etruscan cornu (or simply "horn") is the lingual ancestor of these. It is a predecessor of the post horn, from which the cornet evolved, and was used like a bugle to signal orders on the battlefield.
The cornet's valves allowed for melodic playing throughout the instrument's register. Trumpets were slower to adopt the new valve technology, so for 100 years or more, composers often wrote separate parts for trumpet and cornet. The trumpet would play fanfare-like passages, while the cornet played more melodic ones. The modern trumpet has valves that allow it to play the same notes and fingerings as the cornet.
Cornets and trumpets made in a given key (usually the key of B♭) play at the same pitch, and the technique for playing the instruments is nearly identical. However, cornets and trumpets are not entirely interchangeable, as they differ in timbre. Also available, but usually seen only in the brass band, is an E♭ soprano model, pitched a fourth above the standard B♭.
Unlike the trumpet, which has a cylindrical bore up to the bell section, the tubing of the cornet has a mostly conical bore, starting very narrow at the mouthpiece and gradually widening towards the bell. Cornets following the 1913 patent of E. A. Couturier can have a continuously conical bore. This shape is primarily responsible for the instrument's characteristic warm, mellow tone, which can be distinguished from the more penetrating sound of the trumpet. The conical bore of the cornet also makes it more agile than the trumpet when playing fast passages, but correct pitching is often less assured. The cornet is often preferred for young beginners as it is easier to hold, with its centre of gravity much closer to the player.
The cornet mouthpiece has a shorter and narrower shank than that of a trumpet, so it can fit the cornet's smaller mouthpiece receiver. The cup size is often deeper than that of a trumpet mouthpiece.
One variety is the short-model traditional cornet, also known as a "Shepherd's Crook" shaped model. These are most often large-bore instruments with a rich mellow sound. There is also a long-model, or "American-wrap" cornet, often with a smaller bore and a brighter sound, which is produced in a variety of different tubing wraps and is closer to a trumpet in appearance. The Shepherd's Crook model is preferred by cornet traditionalists. The long-model cornet is generally used in concert bands in the United States and has found little following in British-style brass and concert bands.
A third, and relatively rare variety—distinct from the "American-wrap" cornet—is the "long cornet", which was produced in the mid-20th century by C. G. Conn and F. E. Olds and is visually nearly indistinguishable from a trumpet, except that it has a receiver fashioned to accept cornet mouthpieces.
The echo cornet has been called an obsolete variant. It has a mute chamber (or echo chamber) mounted to the side, acting as a second bell when the fourth valve is pressed. The second bell has a sound similar to that of a Harmon mute and is typically used to play echo phrases, whereupon the player imitates the sound from the primary bell using the echo chamber.
Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates ("buzzes") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture, or embouchure, and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible.
Without valves, the player could produce only a harmonic series of notes, like those played by the bugle and other "natural" brass instruments. These notes are far apart for most of the instrument's range, making diatonic and chromatic playing impossible, except in the extreme high register. The valves change the length of the vibrating column and provide the cornet with the ability to play chromatically.
British brass bands consist only of brass instruments and a percussion section. The cornet is the leading melodic instrument in this ensemble; trumpets are never used. The ensemble consists of about thirty musicians, including nine B♭ cornets and one E♭ cornet (soprano cornet). In the UK, companies such as Besson and Boosey & Hawkes specialized in instruments for brass bands. In America, 19th-century manufacturers such as Graves and Company, Hall and Quinby, E.G. Wright, and the Boston Musical Instrument Manufactory made instruments for this ensemble.
The cornet features in the British-style concert band, and early American concert band pieces, particularly those written or transcribed before 1960, often feature distinct, separate parts for trumpets and cornets. Cornet parts are rarely included in later American pieces, however, and they are replaced in modern American bands by the trumpet. This slight difference in instrumentation derives from the British concert band's heritage in military bands, where the highest brass instrument is always the cornet. There are usually four to six B♭ cornets present in a British concert band, but no E♭ instrument, as this role is taken by the E♭ clarinet.
Fanfareorkesten ("fanfare orchestras"), found in only the Netherlands, Belgium, northern France, and Lithuania, use the complete saxhorn family of instruments. The standard instrumentation includes both the cornet and the trumpet; however, in recent decades, the cornet has largely been replaced by the trumpet.
In old-style jazz bands, the cornet was preferred to the trumpet, but from the swing era onwards, it has been largely replaced by the louder, more piercing trumpet. Likewise, the cornet has been largely phased out of big bands by a growing taste for louder and more aggressive instruments, especially since the advent of bebop in the post-World War II era.
Jazz pioneer Buddy Bolden played the cornet, and Louis Armstrong started off on the instrument, but his switch to the trumpet is often credited with the beginning of the trumpet's dominance in jazz. Cornetists such as Bubber Miley and Rex Stewart contributed substantially to the Duke Ellington Orchestra's early sound. Other influential jazz cornetists include Freddie Keppard, King Oliver, Bix Beiderbecke, Ruby Braff, Bobby Hackett, and Nat Adderley. Notable performances on cornet by players generally associated with the trumpet include Freddie Hubbard's on Empyrean Isles, by Herbie Hancock, and Don Cherry's on The Shape of Jazz to Come, by Ornette Coleman. The band Tuba Skinny is led by cornetist Shaye Cohn.
Soon after its invention, the cornet was introduced into the symphony orchestra, supplementing the trumpets. The use of valves meant they could play a full chromatic scale in contrast with trumpets, which were still restricted to the harmonic series. In addition, their tone was found to unify the horn and trumpet sections. Hector Berlioz was the first significant composer to use them in these ways, and his orchestral works often use pairs of both trumpets and cornets, the latter playing more of the melodic lines. In his Symphonie fantastique (1830), he added a counter-melody for a solo cornet in the second movement (Un Bal).
Cornets continued to be used, particularly in French compositions, well after the valve trumpet was common. They blended well with other instruments and were held to be better suited to certain types of melody. Tchaikovsky used them effectively this way in his Capriccio Italien (1880).
From the early 20th century, the cornet and trumpet combination was still favored by some composers, including Edward Elgar and Igor Stravinsky, but tended to be used for occasions when the composer wanted the specific mellower and more agile sound. The sounds of the cornet and trumpet have grown closer together over time, and the former is now rarely used as an ensemble instrument: in the first version of his ballet Petrushka (1911), Stravinsky gives a celebrated solo to the cornet; in the 1946 revision, he removed cornets from the orchestration and instead assigned the solo to the trumpet. | [
{
"paragraph_id": 0,
"text": "The cornet (/ˈkɔːrnɪt/, US: /kɔːrˈnɛt/) is a brass instrument similar to the trumpet but distinguished from it by its conical bore, more compact shape, and mellower tone quality. The most common cornet is a transposing instrument in B♭. There is also a soprano cornet in E♭ and cornets in A and C. All are unrelated to the Renaissance and early Baroque cornett.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The cornet was derived from the posthorn by applying rotary valves to it in the 1820s, in France. However, by the 1830s, Parisian makers were using piston valves. Cornets first appeared as separate instrumental parts in 19th-century French compositions.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The instrument could not have been developed without the improvement of piston valves by Silesian horn players Friedrich Blühmel (or Blümel) and Heinrich Stölzel, in the early 19th century. These two instrument makers almost simultaneously invented valves, though it is likely that Blühmel was the inventor, while Stölzel developed a practical instrument. They were jointly granted a patent for a period of ten years. François Périnet received a patent in 1838 for an improved valve, which became the model for modern brass instrument piston valves. The first notable virtuoso player was Jean-Baptiste Arban, who studied the cornet extensively and published La grande méthode complète de cornet à piston et de saxhorn, commonly referred to as the Arban method, in 1864. Up until the early 20th century, the trumpet and cornet co-existed in musical ensembles; symphonic repertoire often involves separate parts for trumpet and cornet. As several instrument builders made improvements to both instruments, they started to look and sound more alike. The modern-day cornet is used in brass bands, concert bands, and in specific orchestral repertoire that requires a more mellow sound.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The name \"cornet\" derives from the French corne, meaning \"horn\", itself from Latin cornu. While not musically related, instruments of the Zink family (which includes serpents) are named \"cornetto\" or \"cornett\" in modern English, to distinguish them from the valved cornet described here. The 11th edition of the Encyclopædia Britannica referred to serpents as \"old wooden cornets\". The Roman/Etruscan cornu (or simply \"horn\") is the lingual ancestor of these. It is a predecessor of the post horn, from which the cornet evolved, and was used like a bugle to signal orders on the battlefield.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The cornet's valves allowed for melodic playing throughout the instrument's register. Trumpets were slower to adopt the new valve technology, so for 100 years or more, composers often wrote separate parts for trumpet and cornet. The trumpet would play fanfare-like passages, while the cornet played more melodic ones. The modern trumpet has valves that allow it to play the same notes and fingerings as the cornet.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 5,
"text": "Cornets and trumpets made in a given key (usually the key of B♭) play at the same pitch, and the technique for playing the instruments is nearly identical. However, cornets and trumpets are not entirely interchangeable, as they differ in timbre. Also available, but usually seen only in the brass band, is an E♭ soprano model, pitched a fourth above the standard B♭.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 6,
"text": "Unlike the trumpet, which has a cylindrical bore up to the bell section, the tubing of the cornet has a mostly conical bore, starting very narrow at the mouthpiece and gradually widening towards the bell. Cornets following the 1913 patent of E. A. Couturier can have a continuously conical bore. This shape is primarily responsible for the instrument's characteristic warm, mellow tone, which can be distinguished from the more penetrating sound of the trumpet. The conical bore of the cornet also makes it more agile than the trumpet when playing fast passages, but correct pitching is often less assured. The cornet is often preferred for young beginners as it is easier to hold, with its centre of gravity much closer to the player.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 7,
"text": "The cornet mouthpiece has a shorter and narrower shank than that of a trumpet, so it can fit the cornet's smaller mouthpiece receiver. The cup size is often deeper than that of a trumpet mouthpiece.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 8,
"text": "One variety is the short-model traditional cornet, also known as a \"Shepherd's Crook\" shaped model. These are most often large-bore instruments with a rich mellow sound. There is also a long-model, or \"American-wrap\" cornet, often with a smaller bore and a brighter sound, which is produced in a variety of different tubing wraps and is closer to a trumpet in appearance. The Shepherd's Crook model is preferred by cornet traditionalists. The long-model cornet is generally used in concert bands in the United States and has found little following in British-style brass and concert bands.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 9,
"text": "A third, and relatively rare variety—distinct from the \"American-wrap\" cornet—is the \"long cornet\", which was produced in the mid-20th century by C. G. Conn and F. E. Olds and is visually nearly indistinguishable from a trumpet, except that it has a receiver fashioned to accept cornet mouthpieces.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 10,
"text": "The echo cornet has been called an obsolete variant. It has a mute chamber (or echo chamber) mounted to the side, acting as a second bell when the fourth valve is pressed. The second bell has a sound similar to that of a Harmon mute and is typically used to play echo phrases, whereupon the player imitates the sound from the primary bell using the echo chamber.",
"title": "Relationship to trumpet"
},
{
"paragraph_id": 11,
"text": "Like the trumpet and all other modern brass wind instruments, the cornet makes a sound when the player vibrates (\"buzzes\") the lips in the mouthpiece, creating a vibrating column of air in the tubing. The frequency of the air column's vibration can be modified by changing the lip tension and aperture, or embouchure, and by altering the tongue position to change the shape of the oral cavity, thereby increasing or decreasing the speed of the airstream. In addition, the column of air can be lengthened by engaging one or more valves, thus lowering the pitch. Double and triple tonguing are also possible.",
"title": "Playing technique"
},
{
"paragraph_id": 12,
"text": "Without valves, the player could produce only a harmonic series of notes, like those played by the bugle and other \"natural\" brass instruments. These notes are far apart for most of the instrument's range, making diatonic and chromatic playing impossible, except in the extreme high register. The valves change the length of the vibrating column and provide the cornet with the ability to play chromatically.",
"title": "Playing technique"
},
{
"paragraph_id": 13,
"text": "British brass bands consist only of brass instruments and a percussion section. The cornet is the leading melodic instrument in this ensemble; trumpets are never used. The ensemble consists of about thirty musicians, including nine B♭ cornets and one E♭ cornet (soprano cornet). In the UK, companies such as Besson and Boosey & Hawkes specialized in instruments for brass bands. In America, 19th-century manufacturers such as Graves and Company, Hall and Quinby, E.G. Wright, and the Boston Musical Instrument Manufactury made instruments for this ensemble.",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 14,
"text": "The cornet features in the British-style concert band, and early American concert band pieces, particularly those written or transcribed before 1960, often feature distinct, separate parts for trumpets and cornets. Cornet parts are rarely included in later American pieces, however, and they are replaced in modern American bands by the trumpet. This slight difference in instrumentation derives from the British concert band's heritage in military bands, where the highest brass instrument is always the cornet. There are usually four to six B♭ cornets present in a British concert band, but no E♭ instrument, as this role is taken by the E♭ clarinet.",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 15,
"text": "Fanfareorkesten (\"fanfare orchestras\"), found in only the Netherlands, Belgium, northern France, and Lithuania, use the complete saxhorn family of instruments. The standard instrumentation includes both the cornet and the trumpet; however, in recent decades, the cornet has largely been replaced by the trumpet.",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 16,
"text": "In old-style jazz bands, the cornet was preferred to the trumpet, but from the swing era onwards, it has been largely replaced by the louder, more piercing trumpet. Likewise, the cornet has been largely phased out of big bands by a growing taste for louder and more aggressive instruments, especially since the advent of bebop in the post-World War II era.",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 17,
"text": "Jazz pioneer Buddy Bolden played the cornet, and Louis Armstrong started off on the instrument, but his switch to the trumpet is often credited with the beginning of the trumpet's dominance in jazz. Cornetists such as Bubber Miley and Rex Stewart contributed substantially to the Duke Ellington Orchestra's early sound. Other influential jazz cornetists include Freddie Keppard, King Oliver, Bix Beiderbecke, Ruby Braff, Bobby Hackett, and Nat Adderley. Notable performances on cornet by players generally associated with the trumpet include Freddie Hubbard's on Empyrean Isles, by Herbie Hancock, and Don Cherry's on The Shape of Jazz to Come, by Ornette Coleman. The band Tuba Skinny is led by cornetist Shaye Cohn.",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 18,
"text": "Soon after its invention, the cornet was introduced into the symphony orchestra, supplementing the trumpets. The use of valves meant they could play a full chromatic scale in contrast with trumpets, which were still restricted to the harmonic series. In addition, their tone was found to unify the horn and trumpet sections. Hector Berlioz was the first significant composer to use them in these ways, and his orchestral works often use pairs of both trumpets and cornets, the latter playing more of the melodic lines. In his Symphonie fantastique (1830), he added a counter-melody for a solo cornet in the second movement (Un Bal).",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 19,
"text": "Cornets continued to be used, particularly in French compositions, well after the valve trumpet was common. They blended well with other instruments and were held to be better suited to certain types of melody. Tchaikovsky used them effectively this way in his Capriccio Italien (1880).",
"title": "Ensembles with cornets"
},
{
"paragraph_id": 20,
"text": "From the early 20th century, the cornet and trumpet combination was still favored by some composers, including Edward Elgar and Igor Stravinsky, but tended to be used for occasions when the composer wanted the specific mellower and more agile sound. The sounds of the cornet and trumpet have grown closer together over time, and the former is now rarely used as an ensemble instrument: in the first version of his ballet Petrushka (1911), Stravinsky gives a celebrated solo to the cornet; in the 1946 revision, he removed cornets from the orchestration and instead assigned the solo to the trumpet.",
"title": "Ensembles with cornets"
}
] | The cornet is a brass instrument similar to the trumpet but distinguished from it by its conical bore, more compact shape, and mellower tone quality. The most common cornet is a transposing instrument in B♭. There is also a soprano cornet in E♭ and cornets in A and C. All are unrelated to the Renaissance and early Baroque cornett. | 2001-11-14T07:40:32Z | 2023-12-21T11:18:14Z | [
"Template:Short description",
"Template:Infobox Instrument",
"Template:Brass",
"Template:IPAc-en",
"Template:Music",
"Template:About",
"Template:Use dmy dates",
"Template:More citations needed",
"Template:Cite web",
"Template:Authority control",
"Template:Distinguish",
"Template:Lang",
"Template:Cite news",
"Template:Brass instruments",
"Template:Original research",
"Template:Reflist",
"Template:Cite book",
"Template:Trumpets"
] | https://en.wikipedia.org/wiki/Cornet |
7,102 | CAMP | CAMP, cAMP or camP may stand for: | [
{
"paragraph_id": 0,
"text": "CAMP, cAMP or camP may stand for:",
"title": ""
}
] | CAMP, cAMP or camP may stand for:
CAMP:
Cathelicidin, or Cathelicidin antimicrobial peptide
Campaign Against Marijuana Planting
CAMP, part of the Prague Institute of Planning and Development
Central Atlantic magmatic province
CAMP (company), an Italian manufacturer of climbing equipment
CAMP (studio), a media studio in Mumbai
cAMP:
Cyclic adenosine monophosphate (cAMP)
(+)-cis-2-Aminomethylcyclopropane carboxylic acid, a GABAA-ρ agonist
camP:
2,5-diketocamphane 1,2-monooxygenase, an enzyme | 2023-03-08T17:28:35Z | [
"Template:Wiktionary",
"Template:Srt",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CAMP |
|
7,103 | CGMP | CGMP is an initialism. It can refer to: | [
{
"paragraph_id": 0,
"text": "CGMP is an initialism. It can refer to:",
"title": ""
}
] | CGMP is an initialism. It can refer to:
cyclic guanosine monophosphate (cGMP)
current good manufacturing practice (cGMP)
CGMP, Cisco Group Management Protocol, the Cisco version of Internet Group Management Protocol snooping
caseinoglycomacropeptide (CGMP) or caseinomacropeptide; see K-casein
Competitive guaranteed maximum price | 2020-05-14T04:27:46Z | [
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CGMP |
|
7,104 | Cotton Mather | Cotton Mather FRS (/ˈmæðər/; February 12, 1663 – February 13, 1728) was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, Massachusetts, where he preached for the rest of his life. He has been referred to as the "first American Evangelical".
A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor imposed on New England by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book Wonders of the Invisible World (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his Magnalia Christi Americana (1702).
Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710.
A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization. He also researched the variolation method of inoculation as a means of preventing smallpox contagion, which he learned about from Onesimus, an African slave whom he owned. He dispatched many reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox caused violent controversy in Boston during the outbreak of 1721. Scientist and US founding father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book Bonifacius, or Essays to Do Good (1710) as a major influence on his life.
Cotton Mather was born in 1663 in the city of Boston, the capital of the Massachusetts Bay Colony, to the Rev. Increase Mather and his wife Maria née Cotton. His grandfathers were Richard Mather and John Cotton, both of them prominent Puritan ministers who had played major roles in the establishment and growth of the Massachusetts colony. Richard Mather was a graduate of the University of Oxford and John Cotton a graduate of the University of Cambridge. Increase Mather was a graduate of Harvard College and the Trinity College Dublin, and served as the minister of Boston's original North Church (not to be confused with the Anglican Old North Church of Paul Revere fame). This was one of the two principal Congregationalist churches in the city, the other being the First Church established by John Winthrop. Cotton Mather was therefore born into one of the most influential and intellectually distinguished families in colonial New England and seemed destined to follow his father and grandfathers into the Puritan clergy.
Cotton entered Harvard College, in the neighboring town of Cambridge, in 1674. Aged only eleven and a half, he is the youngest student ever admitted to that institution. At around this time, Cotton began to be afflicted by stuttering, a speech disorder that he would struggle to overcome throughout the rest of his life. Bullied by the older students and fearing that his stutter would make him unsuitable as a preacher, Cotton withdrew temporarily from the College, continuing his education at home. He also took an interest in medicine and considered the possibility of pursuing a career as a physician rather than as a religious minister. Cotton eventually returned to Harvard and received his Bachelor of Arts degree in 1678, followed by a Master of Arts degree in 1681, the same year his father became Harvard President. At Harvard, Cotton studied Hebrew and the sciences.
After completing his education, Cotton joined his father's church as assistant pastor. In 1685, Cotton was ordained and assumed full responsibilities as co-pastor of the church. Father and son continued to share responsibility for the care of the congregation until the death of Increase in 1723. Cotton would die less than five years after his father, and thus spent most of his career in the shadow of the respected and formidable Increase.
When Increase Mather became president of Harvard in 1692, he exercised considerable influence on the politics of the Massachusetts colony. Despite Cotton's efforts, he never became quite as influential as his father. One of the most public displays of their strained relationship emerged during the Salem witch trials, which Increase Mather reportedly did not support. Cotton did surpass his father's output as a writer, producing nearly 400 works.
Cotton Mather married Abigail Phillips, daughter of Colonel John Phillips of Charlestown, on May 4, 1686, when Cotton was twenty-three and Abigail was not quite sixteen years old. They had eight children.
Abigail died of smallpox in 1702, having previously suffered a miscarriage. He married widow Elizabeth Hubbard in 1703. As in his first marriage, he was happily married to a very religious and emotionally stable woman. They had six children. Elizabeth, the couple's newborn twins, and a two-year-old daughter, Jerusha, all succumbed to a measles epidemic in 1713.
On July 5, 1715, Mather married widow Lydia Lee George. Her daughter Katherine, wife of Nathan Howell, became a widow shortly after Lydia married Mather and came to live with the newly married couple. Also living in the Mather household at that time were Mather's children Abigail (21), Hannah (18), Elizabeth (11), and Samuel (9). Initially, Mather wrote in his journal how lovely he found his wife and how much he enjoyed their discussions about scripture. Within a few years of their marriage, Lydia was subject to rages which left Mather humiliated and depressed. They clashed over Mather's piety and his mishandling of Nathan Howell's estate. He began to call her deranged. She left him for ten days, returning when she learned that Mather's son Increase was lost at sea. Lydia nursed him through illnesses, the last of which lasted five weeks and ended with his death on February 13, 1728. Of the children that Mather had with Abigail and Elizabeth, only Hannah and Samuel survived him. He did not have any children with Lydia.
On May 14, 1686, ten days after Cotton Mather's marriage to Abigail Phillips, Edward Randolph disembarked in Boston bearing letters patent from King James II of England that revoked the Charter of the Massachusetts Bay Company and commissioned Randolph to reorganize the colonial government. James's intention was to curb Massachusetts's religious separatism by incorporating the colony into a larger Dominion of New England, without an elected legislature and under a governor who would serve at the pleasure of the Crown. Later that year, the King appointed Sir Edmund Andros as governor of that new Dominion. This was a direct attack upon the Puritan religious and social orders that the Mathers represented, as well as upon the local autonomy of Massachusetts. The colonists were particularly outraged when Andros declared that all grants of land made in the name of the old Massachusetts Bay Company were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony.
The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty in the English throne, triggered the so-called Glorious Revolution in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of "scandalous libel", but the warrant was overruled by Wait Winthrop.
According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the "Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergymen, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House.
In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter. This charter united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston.
Cotton Mather's reputation, in his own day as well as in the historiography and popular culture of subsequent generations, has been very adversely affected by his association with the events surrounding the Salem witch trials of 1692–1693. As a consequence of those trials, nineteen people were executed by hanging for practicing witchcraft and one was pressed to death for refusing to enter a plea before the court. Although Mather had no official role in the legal proceedings, he wrote the book Wonders of the Invisible World, which appeared in 1693 with the endorsement of William Stoughton, the Lieutenant Governor of Massachusetts and chief judge of the Salem witch trials. Mather's book constitutes the most detailed written defense of the conduct of those trials. Mather's role in drumming up and sustaining the witch hysteria behind those proceedings was denounced by Robert Calef in his book More Wonders of the Invisible World, published in 1700. In the 19th century, Nathaniel Hawthorne called Mather "the chief agent of the mischief" at Salem.
More recently, historians have tended to downplay Mather's role in the events at Salem. According to Jan Stievermann of the Heidelberg Center for American Studies,
unlike some other ministers [Cotton Mather] never called for an end to the trials, and he afterwards wrote New England's official defense of the court's proceedings, the infamous Wonders of the Invisible World (1693). Still, there is now a general agreement that his beliefs were very typical of the period, that he acted as a moderating force in the context of the trials, and that he never directly participated in the proceedings. He advised the judges against using spectral evidence and offered recommendations to proceed with caution lest innocent people come to harm. In the end, Mather's role in the witchcraft episode was thus ambivalent and conflicted.
In 1689, Mather published Memorable Providences, Relating to Witchcrafts and Possessions, based on his study of events surrounding the affliction of the children of a Boston mason named John Goodwin. Those afflictions had begun after Goodwin's eldest daughter confronted a washerwoman whom she suspected of stealing some of the family's linen. In response to this, the washerwoman's mother, Ann Glover, verbally insulted the Goodwin girl, who soon began to suffer from hysterical fits that later began to afflict also the three other Goodwin children. Glover was an Irish Catholic widow who could understand English but spoke only Gaelic. Interrogated by the magistrates, she admitted that she tormented her enemies by stroking certain images or dolls with her finger wetted with spittle. After she was sentenced to death for witchcraft, Mather visited her in prison and interrogated her through an interpreter.
Before her execution, Glover warned that her death would not bring relief to the Goodwin children, as she was not the one responsible for their torments. Indeed, after Glover was hanged, the children's afflictions increased. Mather documented these events and attempted to de-possess the "Haunted Children" by prayer and fasting. He also took the eldest Goodwin child, Martha, into his own home, where she lived for several weeks. Eventually, the afflictions ceased and Martha was admitted into Mather's church.
The publication of Mather's Memorable Providences attracted attention on both sides of the Atlantic, including from the eminent English Puritan Richard Baxter. In his book, Mather argued that since there are witches and devils, there are "immortal souls". He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits, believed that people who confessed to using witchcraft were sane, and warned against all magical practices due to their diabolical connections.
Mather's contemporary Robert Calef would later accuse Mather of laying the groundwork, with his Memorable Providences, for the witchcraft hysteria that gripped Salem three years later:
Mr Cotton Mather, was the most active and forward of any Minister in the Country in those matters, taking home one of the Children, and managing such Intreagues with that Child, and after printing such an account of the whole, in his Memorable Providences, as conduced much to the kindling of those Flames, that in Sir William's time threatened the devouring of this Country.
Similar views, on Mather's responsibility for the climate of hysteria over witchcraft that led to the Salem trials, were repeated by later commentators, such as the politician and historian Charles W. Upham in the 19th century.
When the accusations of witchcraft arose in Salem Village in 1692, Cotton Mather was incapacitated by a serious illness, which he attributed to overwork. He suggested that the afflicted girls be separated and offered to take six of them into his home, as he had done previously with Martha Goodwin. That offer was not accepted.
In May of that year, Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, appointed a special "Court of Oyer and Terminer" to try the cases of witchcraft in Salem. The chief judge of that court was Phips's lieutenant governor, William Stoughton. Stoughton had close ties to the Mathers and had been recommended as Governor Phips's lieutenant by Increase Mather.
Another of the judges in the new court, John Richards, requested that Cotton Mather accompany him to Salem, but Mather refused due to his ill health. Instead, Mather wrote a long letter to Richards in which he gave his advice on the impending trials. In that letter, Mather states that witches guilty of the most grievous crimes should be executed, but that witches convicted of lesser offenses deserve more lenient punishment. He also wrote that the identification and conviction of all witches should be undertaken with the greatest caution and warned against the use of spectral evidence (i.e., testimony that the specter of the accused had tormented a victim) on the grounds that devils could assume the form of innocent and even virtuous people. Under English law, spectral evidence had been admissible in witchcraft trials for a century before the events in Salem, and it would remain admissible until 1712. There was, however, debate among experts as to how much weight should be given to such testimonies.
On June 10, 1692, Bridget Bishop, the thrice-married owner of an unlicensed tavern, was hanged after being convicted and sentenced by the Court of Oyer and Terminer, based largely on spectral evidence. A group of twelve Puritan ministers issued a statement, drawn up by Cotton Mather and presented to Governor Phips and his council a few days later, entitled The Return of Several Ministers. In that document, Mather criticized the court's reliance on spectral evidence and recommended that it adopt a more cautious procedure. However, he ended the document with a statement defending the continued prosecution of witchcraft according to the "Direction given by the Laws of God, and the wholesome Statues of the English Nation". Robert Calef would later criticize Mather's intervention in The Return of Several Ministers as "perfectly ambidexter, giving a great or greater encouragement to proceed in those dark methods, than cautions against them."
On August 4, Cotton Mather preached a sermon before his North Church congregation on the text of Revelation 12:12: "Woe to the Inhabitants of the Earth, and of the Sea; for the Devil is come down unto you, having great Wrath; because he knoweth, that he hath but a short time." In the sermon, Mather claimed that the witches "have associated themselves to do no less a thing than to destroy the Kingdom of our Lord Jesus Christ, in these parts of the World." Although he did not intervene in any of the trials, there are some testimonies that Mather was present at the executions that were carried out in Salem on August 19. According to Mather's contemporary critic Robert Calef, the crowd was disturbed by George Burroughs's eloquent declarations of innocence from the scaffold and by his recitation of the Lord's Prayer, of which witches were commonly believed to be incapable. Calef claimed that, after Burroughs had been hanged,
Mr. Cotton Mather, being mounted upon a Horse, addressed himself to the People, partly to declare that [Burroughs] was no ordained Minister, partly to possess the People of his guilt, saying that the devil often had been transformed into the Angel of Light. And this did somewhat appease the People, and the Executions went on.
As public discontent with the witch trials grew in the summer of 1692, threatening civil unrest, the conservative Cotton Mather felt compelled to defend the responsible authorities. On September 2, 1692, after eleven people had been executed as witches, Cotton Mather wrote a letter to Judge Stoughton congratulating him on "extinguishing of as wonderful a piece of devilism as has been seen in the world". As the opposition to the witch trials was bringing them to a halt, Mather wrote Wonders of the Invisible World, a defense of the trials that carried Stoughton's official approval.
Mather's Wonders did little to appease the growing clamor against the Salem witch trials. At around the same time that the book began to circulate in manuscript form, Governor Phips decided to restrict greatly the use of spectral evidence, thus raising a great barrier against further convictions. The Court of Oyer and Terminer was dismissed on October 29. A new court convened on January 1693 to hear the remaining cases, almost all of which ended in acquittal. In May, Governor Phips issued a general pardon, thus bringing the witch trials to an end.
The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. Mather appears to have remained convinced that genuine witches had been executed in Salem and he never publicly expressed regrets over his role in those events. Robert Calef, an otherwise obscure Boston merchant, published More Wonders of the Invisible World in 1700, bitterly attacking Cotton Mather over his role in the events of 1692. In the words of 20th-century historian Samuel Eliot Morison, "Robert Calef tied a tin can to Cotton Mather which has rattled and banged through the pages of superficial and popular historians". Intellectual historian Reiner Smolinski, an expert on the writings of Cotton Mather, found it "deplorable that Mather's reputation is still overshadowed by the specter of Salem witchcraft."
Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was Magnalia Christi Americana (which may be translated as "The Glorious Works of Christ in America"), subtitled "The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books." Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693 and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. The Magnalia includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches, including those of Hannah Duston and Hannah Swarton.
According to Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer,
If the epic ambitions of Magnalia, its attempt to put America on the cultural map, recall such later American works as Moby-Dick (to which it has been compared), its effort to rejoin provincial America to the mainstream of English culture recalls rather The Waste Land. Genuinely Anglo-American in outlook, the book projects a New England which is ultimately an enlarged version of Cotton Mather himself, a pious citizen of "The Metropolis of the whole English America".
Silverman argues that, although Mather glorifies New England's Puritan past, in the Magnalia he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a Transatlantic Protestant Christianity that included, in addition to Mather's own Congregationalists, also Presbyterians, Baptists, and low church Anglicans.
In 1693 Mather also began work on a grand intellectual project that he titled Biblia Americana, which sought to provide a commentary and interpretation of the Christian Bible in light of "all of the Learning in the World". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project "looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity." Mather could not find a publisher for the Biblia Americana, which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2023, seven of the ten volumes have appeared in print.
In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689, and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts.
Although the Mathers (to whom Dudley was related by marriage) continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702. Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett.
In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Mather was outmaneuvered by Dudley, and the political rivalry left him increasingly isolated at a time when Massachusetts society was steadily moving away from the Puritan tradition that he represented.
Cotton Mather was a fellow of Harvard College from 1690 to 1702, and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of "rector" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen. Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president.
Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving the Puritan orthodoxy in New England. In 1718, Cotton convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation.
Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of its own number, the Rev. Benjamin Coleman, an old rival of Mather. When Coleman refused it, the presidency went finally to the Rev. Benjamin Wadsworth.
The practice of smallpox inoculation (as distinguished from the later practice of vaccination) was developed, possibly in 8th-century India or 10th-century China, and had reached Turkey by the 17th century. It was also practiced in western Africa, but it is not known when it started there. Inoculation, or rather variolation, involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic in 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation.
Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a "pesthouse".
In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the Philosophical Transactions. Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again.
By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared. Smallpox returned on April 22 of that year, when HMS Seahorse arrived from the West Indies carrying smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June, the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, many residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission. The death toll reached 101 in September, and the Selectmen, powerless to stop it, "severely limited the length of time funeral bells could toll." As one response, legislators delegated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families.
On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths.
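Put in proportional terms, the figures quoted above imply a stark gap in fatality rates. The short Python sketch below is only an illustrative back-of-the-envelope check using those quoted totals (it is not a calculation reported in the period sources): roughly one in seven of those infected in the ordinary way died, against about one in fifty of Boylston's inoculated patients.

# Illustrative comparison of the 1721-22 Boston figures quoted above (assumed exact as cited).
natural_cases, natural_deaths = 5889, 844       # cases and deaths from natural infection
inoculated, inoculated_deaths = 287, 6          # Boylston's inoculated patients and deaths
print(f"Fatality among natural cases: {natural_deaths / natural_cases:.1%}")  # about 14.3%
print(f"Fatality among inoculated:    {inoculated_deaths / inoculated:.1%}")  # about 2.1%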
Boylston and Mather's inoculation crusade "raised a horrid Clamour" among the people of Boston. Both Boylston and Mather were "Object[s] of their Fury; their furious Obloquies and Invectives", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again.
The New-England Courant published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams from Boston, who posted that "several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful", to those put forth in a pamphlet by Dr. William Douglass of Boston, entitled The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home.
Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease, nor an antipathy toward one, but the creation of one. For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal.
As with most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted Matthew 9:12, when Jesus said: "It is not the healthy who need a doctor, but the sick." William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians, and not meddle in areas where they lacked expertise. According to Douglass, smallpox inoculation was "a medical experiment of consequence," one not to be undertaken lightly. He believed that not all learned individuals were qualified to doctor others, and while ministers took on several roles in the early years of the colony, including that of caring for the sick, they were now expected to stay out of state and civil affairs. Douglass felt that inoculation caused more deaths than it prevented. The only reason Mather had had success in it, he said, was because Mather had used it on children, who are naturally more resilient. Douglass vowed to always speak out against "the wickedness of spreading infection". Speak out he did: "The battle between these two prestigious adversaries [Douglass and Mather] lasted far longer than the epidemic itself, and the literature accompanying the controversy was both vast and venomous."
Generally, Puritan pastors favored the inoculation experiments. Increase Mather, Cotton's father, was joined by prominent pastors Benjamin Colman and William Cooper in openly propagating the use of inoculations. "One of the classic assumptions of the Puritan mind was that the will of God was to be discerned in nature as well as in revelation." Nevertheless, Williams questioned whether the smallpox "is not one of the strange works of God; and whether inoculation of it be not a fighting with the most High." He also asked his readers if the smallpox epidemic may have been given to them by God as "punishment for sin," and warned that attempting to shield themselves from God's fury (via inoculation), would only serve to "provoke him more".
Puritans found meaning in affliction, and they did not yet know why God was showing them disfavor through smallpox. Not to address their errant ways before attempting a cure could set them back in their "errand". Many Puritans believed that creating a wound and inserting poison was doing violence and therefore was antithetical to the healing art. They grappled with adhering to the Ten Commandments, with being proper church members and good caring neighbors. The apparent contradiction between harming or murdering a neighbor through inoculation and the Sixth Commandment—"thou shalt not kill"—seemed insoluble and hence stood as one of the main objections against the procedure. Williams maintained that because the subject of inoculation could not be found in the Bible, it was not the will of God, and therefore "unlawful." He explained that inoculation violated The Golden Rule, because if one neighbor voluntarily infected another with disease, he was not doing unto others as he would have done to him. With the Bible as the Puritans' source for all decision-making, lack of scriptural evidence concerned many, and Williams vocally scorned Mather for not being able to reference an inoculation edict directly from the Bible.
With the smallpox epidemic gathering speed and racking up a staggering death toll, a solution to the crisis was becoming more urgently needed by the day. The use of quarantine and various other efforts, such as balancing the body's humors, did not slow the spread of the disease. As news rolled in from town to town and correspondence arrived from overseas, horrific stories of suffering and loss due to smallpox stirred mass panic among the people. "By circa 1700, smallpox had become among the most devastating of epidemic diseases circulating in the Atlantic world."
Mather strongly challenged the perception that inoculation was against the will of God and argued the procedure was not outside of Puritan principles. He wrote that "whether a Christian may not employ this Medicine (let the matter of it be what it will) and humbly give Thanks to God's good Providence in discovering of it to a miserable World; and humbly look up to His Good Providence (as we do in the use of any other Medicine) It may seem strange, that any wise Christian cannot answer it. And how strangely do Men that call themselves Physicians betray their Anatomy, and their Philosophy, as well as their Divinity in their invectives against this Practice?" The Puritan minister began to embrace the sentiment that smallpox was an inevitability for anyone, both the good and the wicked, yet God had provided them with the means to save themselves. Mather reported that, from his view, "none that have used it ever died of the Small Pox, tho at the same time, it were so malignant, that at least half the People died, that were infected With it in the Common way."
While Mather was experimenting with the procedure, the prominent Puritan pastors Benjamin Colman and William Cooper expressed public and theological support for it. The practice of smallpox inoculation was eventually accepted by the general population through first-hand experience and personal relationships. Although many were initially wary of the concept, it became widely used and supported once people witnessed the procedure's consistently positive results within their own communities. One important change in the practice after 1721 was the regulated quarantine of inoculees.
Although Mather and Boylston were able to demonstrate the efficacy of the practice, the debate over inoculation would continue even beyond the epidemic of 1721–22. After overcoming considerable difficulty and achieving notable success, Boylston traveled to London in 1725, where he published his results and was elected to the Royal Society in 1726, with Mather formally receiving the honor two years prior.
In 1716, Mather used different varieties of maize ("Indian corn") to conduct one of the first recorded experiments on plant hybridization. He described the results in a letter to his friend James Petiver:
First: my Friend planted a Row of Indian corn that was Coloured Red and Blue; the rest of the Field being planted with corn of the yellow, which is the most usual color. To the Windward side, this Red and Blue Row, so infected Three or Four whole Rows, as to communicate the same Colour unto them; and part of ye Fifth and some of ye Sixth. But to the Leeward Side, no less than Seven or Eight Rows, had ye same Colour communicated unto them; and some small Impressions were made on those that were yet further off.
In his Curiosa Americana (1712–1724) collection, Mather also announced that flowering plants reproduce sexually, an observation that later became the basis of the Linnaean system of plant classification. Mather may also have been the first to develop the concept of genetic dominance, which later would underpin Mendelian genetics.
In 1713, the Secretary of the Royal Society of London, naturalist Richard Waller, informed Mather that he had been elected as a fellow of the Society. Mather was the eighth colonial American to join that learned body, with the first having been John Winthrop the Younger in 1662. During the controversies surrounding Mather's smallpox inoculation campaign of 1721, his adversaries questioned that credential on the grounds that Mather's name did not figure in the published lists of the Society's members. At the time, the Society responded that those published lists included only members who had been inducted in person and who were therefore entitled to vote in the Society's yearly elections. In May 1723, Mather's correspondent John Woodward discovered that, although Mather had been duly nominated in 1713, approved by the council, and informed by Waller of his election at that time, due to an oversight the nomination had not in fact been voted upon by the full assembly of fellows or the vote had not been recorded. After Woodward informed the Society of the situation, the members proceeded to elect Mather by a formal vote.
Mather's enthusiasm for experimental science was strongly influenced by his reading of Robert Boyle's work. Mather was a significant popularizer of the new scientific knowledge and promoted Copernican heliocentrism in some of his sermons. He also argued against the spontaneous generation of life and compiled a medical manual titled The Angel of Bethesda that he hoped would assist people who were unable to procure the services of a physician, but which went unpublished in Mather's lifetime. This was the only comprehensive medical work written in colonial English-speaking America. Although much of what Mather included in that manual were folk remedies now regarded as unscientific or superstitious, some of them are still valid, including smallpox inoculation and the use of citrus juice to treat scurvy. Mather also outlined an early form of germ theory and discussed psychogenic diseases, while recommending hygiene, physical exercise, temperate diet, and avoidance of tobacco smoking.
In his later years, Mather also promoted the professionalization of scientific research in America. He presented a Boston tradesman named Grafton Feveryear with the barometer that Feveryear used to make the first quantitative meteorological observations in New England, which he communicated to the Royal Society in 1727. Mather also sponsored Isaac Greenwood, a Harvard graduate and member of Mather's church, who travelled to London and collaborated with the Royal Society's curator of experiments, John Theophilus Desaguliers. Greenwood later became the first Hollis professor of mathematics and natural philosophy at Harvard, and may well have been the first American to practice science professionally.
Cotton Mather's household included both free servants and a number of slaves who performed domestic chores. Surviving records indicate that, over the course of his lifetime, Mather owned at least three, and probably more, slaves. Like the vast majority of Christians at the time, but unlike his political rival Judge Samuel Sewall, Mather was never an abolitionist, although he did publicly denounce what he regarded as the illegal and inhuman aspects of the burgeoning Atlantic slave trade. In his book The Negro Christianized (1706), Mather insisted that slaveholders should treat their black slaves humanely and instruct them in Christianity with a view to promoting their salvation. Mather received black members of his congregation in his home and he paid a schoolteacher to instruct local black people in reading.
Mather consistently held that black Africans were "of one Blood" with the rest of mankind and that blacks and whites would meet as equals in Heaven. After a number of black people carried out arson attacks in Boston in 1723, Mather asked the outraged white Bostonians whether the black population had been "always treated according to the Rules of Humanity? Are they treated as those, that are of one Blood with us, and those who have Immortal Souls in them, and are not mere Beasts of Burden?"
Mather advocated the Christianization of black slaves both on religious grounds and as tending to make them more patient and faithful servants of their masters. In The Negro Christianized, Mather argued against the opinion of Richard Baxter that a Christian could not enslave another baptized Christian. The African slave Onesimus, from whom Mather first learned about smallpox inoculation, had been purchased for him as a gift by his congregation in 1706. Despite his efforts, Mather was unable to convert Onesimus to Christianity and finally manumitted him in 1716.
Throughout his career Mather was also keen to minister to convicted pirates. He produced a number of pamphlets and sermons concerning piracy, including Faithful Warnings to prevent Fearful Judgments; Instructions to the Living, from the Condition of the Dead; The Converted Sinner… A Sermon Preached in Boston, May 31, 1724, In the Hearing and at the Desire of certain Pirates; A Brief Discourse occasioned by a Tragical Spectacle of a Number of Miserables under Sentence of Death for Piracy; Useful Remarks. An Essay upon Remarkables in the Way of Wicked Men and The Vial Poured Out Upon the Sea. His father Increase had preached at the trial of Dutch pirate Peter Roderigo; Cotton Mather in turn preached at the trials and sometimes executions of pirate Captains (or the crews of) William Fly, John Quelch, Samuel Bellamy, William Kidd, Charles Harris, and John Phillips. He also ministered to Thomas Hawkins, Thomas Pound, and William Coward; having been convicted of piracy, they were jailed alongside "Mary Glover the Irish Catholic witch," daughter of witch "Goody" Ann Glover at whose trial Mather had also preached.
In his conversations with William Fly and his crew Mather scolded them: "You have something within you, that will compell you to confess, That the Things which you have done, are most Unreasonable and Abominable. The Robberies and Piracies, you have committed, you can say nothing to Justify them. … It is a most hideous Article in the Heap of Guilt lying on you, that an Horrible Murder is charged upon you; There is a cry of Blood going up to Heaven against you."
Cotton Mather was twice widowed, and only two of his 15 children survived him. He died the day after his 65th birthday and was buried at Copp's Hill Burying Ground in Boston's North End.
Mather was a prolific writer and industrious in having his works printed, including a vast number of his sermons.
Mather's first published sermon, printed in 1686, concerned the execution of James Morgan, convicted of murder. Thirteen years later, Mather published the sermon in a compilation, along with other similar works, called Pillars of Salt.
Magnalia Christi Americana, considered Mather's greatest work, was published in 1702, when he was 39. The book includes several biographies of saints and recounts the settlement of New England. In this context "saints" refers not to the canonized saints of the Catholic Church but to the Puritan divines about whom Mather was writing. It comprises seven books, including Pietas in Patriam: The life of His Excellency Sir William Phips, originally published anonymously in London in 1697. Although it is one of Mather's best-known works, some critics have found it hard to follow, poorly paced, and loosely organized, while others have praised it as one of the best efforts at documenting the establishment of colonial America and the growth of its people.
In 1721, Mather published The Christian Philosopher, the first systematic book on science published in America. Mather attempted to show how Newtonian science and religion were in harmony. It was in part based on Robert Boyle's The Christian Virtuoso (1690). Mather reportedly took inspiration from Hayy ibn Yaqdhan, by the 12th-century Islamic philosopher Abu Bakr Ibn Tufail.
Despite condemning the "Mahometans" as infidels, Mather viewed the novel's protagonist, Hayy, as a model for his ideal Christian philosopher and monotheistic scientist. Mather regarded Hayy as a noble savage and applied the idea to his attempts to understand Native Americans in order to convert them to Puritan Christianity. Mather's short treatise on the Lord's Supper was later translated by his cousin Josiah Cotton.
Marvel Comics features a supervillain named Cotton Mather, also known as the "Witch-Slayer", who is an enemy of Spider-Man. He first appears in issue #41 of Marvel Team-Up (a series launched in 1972) and continues to appear through issue #45.
The rock band Cotton Mather is named after Mather.
The Handsome Family's 2006 album Last Days of Wonder is named in reference to Mather's 1693 book Wonders of the Invisible World, which lyricist Rennie Sparks found intriguing because of what she called its "madness brimming under the surface of things."
Howard da Silva portrayed Mather in Burn, Witch, Burn, a December 15, 1975 episode of the CBS Radio Mystery Theater.
One of the stories in Richard Brautigan's collection Revenge of the Lawn is called "1692 Cotton Mather Newsreel".
Seth Gabel portrays Cotton Mather in the TV series Salem, which aired from 2014 to 2017.
Notes
References
{
"paragraph_id": 0,
"text": "Cotton Mather FRS (/ˈmæðər/; February 12, 1663 – February 13, 1728) was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, Massachusetts, where he preached for the rest of his life. He has been referred to as the \"first American Evangelical\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor imposed on New England by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book Wonders of the Invisible World (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his Magnalia Christi Americana (1702).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization. He also researched the variolation method of inoculation as a means of preventing smallpox contagion, which he learned about from an African-American slave that he owned, Onesimus. He dispatched many reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox caused violent controversy in Boston during the outbreak of 1721. Scientist and US founding father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book Bonifacius, or Essays to Do Good (1710) as a major influence on his life.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Cotton Mather was born in 1663 in the city of Boston, the capital of the Massachusetts Bay Colony, to the Rev. Increase Mather and his wife Maria née Cotton. His grandfathers were Richard Mather and John Cotton, both of them prominent Puritan ministers who had played major roles in the establishment and growth of the Massachusetts colony. Richard Mather was a graduate of the University of Oxford and John Cotton a graduate of the University of Cambridge. Increase Mather was a graduate of Harvard College and the Trinity College Dublin, and served as the minister of Boston's original North Church (not to be confused with the Anglican Old North Church of Paul Revere fame). This was one of the two principal Congregationalist churches in the city, the other being the First Church established by John Winthrop. Cotton Mather was therefore born into one of the most influential and intellectually distinguished families in colonial New England and seemed destined to follow his father and grandfathers into the Puritan clergy.",
"title": "Early life and education"
},
{
"paragraph_id": 5,
"text": "Cotton entered Harvard College, in the neighboring town of Cambridge, in 1674. Aged only eleven and a half, he is the youngest student ever admitted to that institution. At around this time, Cotton began to be afflicted by stuttering, a speech disorder that he would struggle to overcome throughout the rest of his life. Bullied by the older students and fearing that his stutter would make him unsuitable as a preacher, Cotton withdrew temporarily from the College, continuing his education at home. He also took an interest in medicine and considered the possibility of pursuing a career as a physician rather than as a religious minister. Cotton eventually returned to Harvard and received his Bachelor of Arts degree in 1678, followed by a Master of Arts degree in 1681, the same year his father became Harvard President. At Harvard, Cotton studied Hebrew and the sciences.",
"title": "Early life and education"
},
{
"paragraph_id": 6,
"text": "After completing his education, Cotton joined his father's church as assistant pastor. In 1685, Cotton was ordained and assumed full responsibilities as co-pastor of the church. Father and son continued to share responsibility for the care of the congregation until the death of Increase in 1723. Cotton would die less than five years after his father, and was therefore throughout most of his career in the shadow of the respected and formidable Increase.",
"title": "Early life and education"
},
{
"paragraph_id": 7,
"text": "When Increase Mather became president of Harvard in 1692, he exercised considerable influence on the politics of the Massachusetts colony. Despite Cotton's efforts, he never became quite as influential as his father. One of the most public displays of their strained relationship emerged during the Salem witch trials, which Increase Mather reportedly did not support. Cotton did surpass his father's output as a writer, producing nearly 400 works.",
"title": "Early life and education"
},
{
"paragraph_id": 8,
"text": "Cotton Mather married Abigail Phillips, daughter of Colonel John Phillips of Charlestown, on May 4, 1686, when Cotton was twenty-three and Abigail was not quite sixteen years old. They had eight children.",
"title": "Personal life"
},
{
"paragraph_id": 9,
"text": "Abigail died of smallpox in 1702, having previously suffered a miscarriage. He married widow Elizabeth Hubbard in 1703. Like his first marriage, he was happily married to a very religious and emotionally stable woman. They had six children. Elizabeth, the couple's newborn twins, and a two-year-old daughter, Jerusha, all succumbed to a measles epidemic in 1713.",
"title": "Personal life"
},
{
"paragraph_id": 10,
"text": "On July 5, 1715, Mather married widow Lydia Lee George. Her daughter Katherine, wife of Nathan Howell, became a widow shortly after Lydia married Mather and she came to live with the newly married couple. Also living in the Mather household at that time were Mather's children Abigal (21), Hannah (18), Elizabeth (11), and Samuel (9). Initially, Mather wrote in his journal how lovely he found his wife and how much he enjoyed their discussions about scripture. Within a few years of their marriage, Lydia was subject to rages which left Mather humiliated and depressed. They clashed over Mather's piety and his mishandling of Nathan Howell's estate. He began to call her deranged. She left him for ten days, returning when she learned that Mather's son Increase was lost at sea. Lydia nursed him through illnesses, the last of which lasted five weeks and ended with his death on February 15, 1728. Of the children that Mather had with Abigail and Elizabeth, the only children to survive him were Hannah and Samuel. He did not have any children with Lydia.",
"title": "Personal life"
},
{
"paragraph_id": 11,
"text": "On May 14, 1686, ten days after Cotton Mather's marriage to Abigail Phillips, Edward Randolph disembarked in Boston bearing letters patent from King James II of England that revoked the Charter of the Massachusetts Bay Company and commissioned Randolph to reorganize the colonial government. James's intention was to curb Massachusetts's religious separatism by incorporating the colony it into a larger Dominion of New England, without an elected legislature and under a governor who would serve at the pleasure of the Crown. Later that year, the King appointed Sir Edmund Andros as governor of that new Dominion. This was a direct attack upon the Puritan religious and social orders that the Mathers represented, as well as upon the local autonomy of Massachusetts. The colonists were particularly outraged when Andros declared that all grants of land made in the name of the old Massachusetts Bay Company were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony.",
"title": "Revolt of 1689"
},
{
"paragraph_id": 12,
"text": "The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty in the English throne, triggered the so-called Glorious Revolution in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of \"scandalous libel\", but the warrant was overruled by Wait Winthrop.",
"title": "Revolt of 1689"
},
{
"paragraph_id": 13,
"text": "According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the \"Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent\", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergymen, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House.",
"title": "Revolt of 1689"
},
{
"paragraph_id": 14,
"text": "In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter. This charter united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston.",
"title": "Revolt of 1689"
},
{
"paragraph_id": 15,
"text": "Cotton Mather's reputation, in his own day as well as in the historiography and popular culture of subsequent generations, has been very adversely affected by his association with the events surrounding the Salem witch trials of 1692–1693. As a consequence of those trials, nineteen people were executed by hanging for practicing witchcraft and one was pressed to death for refusing to enter a plea before the court. Although Mather had no official role in the legal proceedings, he wrote the book Wonders of the Invisible World, which appeared in 1693 with the endorsement of William Stoughton, the Lieutenant Governor of Massachusetts and chief judge of the Salem witch trials. Mather's book constitutes the most detailed written defense of the conduct of those trials. Mather's role in drumming up and sustaining the witch hysteria behind those proceedings was denounced by Robert Calef in his book More Wonders of the Invisible World, published in 1700. In the 19th century, Nathaniel Hawthorne called Mather \"the chief agent of the mischief\" at Salem.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 16,
"text": "More recently historians have tended to downplay Mather's role in the events at Salem. According to Jan Stievermann, of the Heidelberg Center for American Studies,",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 17,
"text": "unlike some other ministers [Cotton Mather] never called for an end to the trials, and he afterwards wrote New England's official defense of the court's proceedings, the infamous Wonders of the Invisible World (1693). Still, there is now a general agreement that his beliefs were very typical of the period, that he acted as a moderating force in the context of the trials, and that he never directly participated in the proceedings. He advised the judges against using spectral evidence and offered recommendations to proceed with caution lest innocent people come to harm. In the end, Mather's role in the witchcraft episode was thus ambivalent and conflicted.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 18,
"text": "In 1689, Mather published Memorable Providences, Relating to Witchcrafts and Possessions, based on his study of events surrounding the affliction of the children of a Boston mason named John Goodwin. Those afflictions had begun after Goodwin's eldest daughter confronted a washerwoman whom she suspected of stealing some of the family's linen. In response to this, the washerwoman's mother, Ann Glover, verbally insulted the Goodwin girl, who soon began to suffer from hysterical fits that later began to afflict also the three other Goodwin children. Glover was an Irish Catholic widow who could understand English but spoke only Gaelic. Interrogated by the magistrates, she admitted that she tormented her enemies by stroking certain images or dolls with her finger wetted with spittle. After she was sentenced to death for witchcraft, Mather visited her in prison and interrogated her through an interpreter.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 19,
"text": "Before her execution, Glover warned that her death would not bring relief to the Goodwin children, as she was not the one responsible for their torments. Indeed, after Glover was hanged the children's afflictions increased. Mather documented these events and attempted to de-possess the \"Haunted Children\" by prayer and fasting. He also took in the eldest Goodwin child, Martha, into his own home, where she lived for several weeks. Eventually, the afflictions ceased and Martha was admitted into Mather's church.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 20,
"text": "The publication of Mather's Memorable Providences attracted attention on both sides of the Atlantic, including from the eminent English Puritan Richard Baxter. In his book, Mather argued that since there are witches and devils, there are \"immortal souls\". He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits, believed that people who confessed to using witchcraft were sane, and warned against all magical practices due to their diabolical connections.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 21,
"text": "Mather's contemporary Robert Calef would later accuse Mather of laying the groundwork, with his Memorable Providences, for the witchcraft hysteria that gripped Salem three years later:",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 22,
"text": "Mr Cotton Mather, was the most active and forward of any Minister in the Country in those matters, taking home one of the Children, and managing such Intreagues with that Child, and after printing such an account of the whole, in his Memorable Providences, as conduced much to the kindling of those Flames, that in Sir William's time threatened the devouring of this Country.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 23,
"text": "Similar views, on Mather's responsibility for the climate of hysteria over witchcraft that led to the Salem trials, were repeated by later commentators, such as the politician and historian Charles W. Upham in the 19th century.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 24,
"text": "When the accusations of witchcraft arose in Salem Village in 1692, Cotton Mather was incapacitated by a serious illness, which he attributed to overwork. He suggested that the afflicted girls be separated and offered to take six of them into his home, as he had done previously with Martha Goodwin. That offer was not accepted.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 25,
"text": "In May of that year, Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, appointed a special \"Court of Oyer and Terminer\" to try the cases of witchcraft in Salem. The chief judge of that court was Phips's lieutenant governor, William Stoughton. Stoughton had close ties to the Mathers and had been recommended as Governor Phips's lieutenant by Increase Mather.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 26,
"text": "Another of the judges in the new court, John Richards, requested that Cotton Mather accompany him to Salem, but Mather refused due to his ill health. Instead, Mather wrote a long letter to Richards in which he gave his advice on the impending trials. In that letter, Mather states that witches guilty of the most grievous crimes should be executed, but that witches convicted of lesser offenses deserve more lenient punishment. He also wrote that the identification and conviction of all witches should be undertaken with the greatest caution and warned against the use of spectral evidence (i.e., testimony that the specter of the accused had tormented a victim) on the grounds that devils could assume the form of innocent and even virtuous people. Under English law, spectral evidence had been admissible in witchcraft trials for a century before the events in Salem, and it would remain admissible until 1712. There was, however, debate among experts as to how much weight should be given to such testimonies.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 27,
"text": "On June 10, 1692, Bridget Bishop, the thrice-married owner of an unlicensed tavern, was hanged after being convicted and sentenced by the Court of Oyer and Terminer, based largely on spectral evidence. A group of twelve Puritan ministers issued a statement, drawn up by Cotton Mather and presented to Governor Phips and his council a few days later, entitled The Return of Several Ministers. In that document, Mather criticized the court's reliance on spectral evidence and recommended that it adopt a more cautious procedure. However, he ended the document with a statement defending the continued prosecution of witchcraft according to the \"Direction given by the Laws of God, and the wholesome Statues of the English Nation\". Robert Calef would later criticize Mather's intervention in The Return of Several Ministers as \"perfectly ambidexter, giving a great or greater encouragement to proceed in those dark methods, than cautions against them.\"",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 28,
"text": "On August 4, Cotton Mather preached a sermon before his North Church congregation on the text of Revelation 12:12: \"Woe to the Inhabitants of the Earth, and of the Sea; for the Devil is come down unto you, having great Wrath; because he knoweth, that he hath but a short time.\" In the sermon, Mather claimed that the witches \"have associated themselves to do no less a thing than to destroy the Kingdom of our Lord Jesus Christ, in these parts of the World.\" Although he did not intervene in any of the trials, there are some testimonies that Mather was present at the executions that were carried out in Salem on August 19. According to his Mather's contemporary critic Robert Calef, the crowd was disturbed by George Burroughs's eloquent declarations of innocence from the scaffold and by his recitation of the Lord's Prayer, of which witches were commonly believed to be incapable. Calef claimed that, after Burroughs had been hanged,",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 29,
"text": "Mr. Cotton Mather, being mounted upon a Horse, addressed himself to the People, partly to declare that [Burroughs] was no ordained Minister, partly to possess the People of his guilt, saying that the devil often had been transformed into the Angel of Light. And this did somewhat appease the People, and the Executions went on.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 30,
"text": "As public discontent with the witch trials grew in the summer of 1692, threatening civil unrest, the conservative Cotton Mather felt compelled to defend the responsible authorities. On September 2, 1692, after eleven people had been executed as witches, Cotton Mather wrote a letter to Judge Stoughton congratulating him on \"extinguishing of as wonderful a piece of devilism as has been seen in the world\". As the opposition to the witch trials was bringing them to a halt, Mather wrote Wonders of the Invisible World, a defense of the trials that carried Stoughton's official approval.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 31,
"text": "Mather's Wonders did little to appease the growing clamor against the Salem witch trials. At around the same time that the book began to circulate in manuscript form, Governor Phips decided to restrict greatly the use of spectral evidence, thus raising a great barrier against further convictions. The Court of Oyer and Terminer was dismissed on October 29. A new court convened on January 1693 to hear the remaining cases, almost all of which ended in acquittal. In May, Governor Phips issued a general pardon, thus bringing the witch trials to an end.",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 32,
"text": "The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. Mather appears to have remained convinced that genuine witches had been executed in Salem and he never publicly expressed regrets over his role in those events. Robert Calef, an otherwise obscure Boston merchant, published More Wonders of the Invisible World in 1700, bitterly attacking Cotton Mather over his role in the events of 1692. In the words of 20th-century historian Samuel Eliot Morison, \"Robert Calef tied a tin can to Cotton Mather which has rattled and banged through the pages of superficial and popular historians\". Intellectual historian Reiner Smolinski, an expert on the writings of Cotton Mather, found it \"deplorable that Mather's reputation is still overshadowed by the specter of Salem witchcraft.\"",
"title": "Involvement with the Salem witch trials"
},
{
"paragraph_id": 33,
"text": "Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was Magnalia Christi Americana (which may be translated as \"The Glorious Works of Christ in America\"), subtitled \"The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books.\" Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693 and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. The Magnalia includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches, including those of Hannah Duston and Hannah Swarton.",
"title": "Historical and theological writings"
},
{
"paragraph_id": 34,
"text": "According to Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer,",
"title": "Historical and theological writings"
},
{
"paragraph_id": 35,
"text": "If the epic ambitions of Magnalia, its attempt to put American on the cultural map, recall such later American works as Moby-Dick (to which it has been compared), its effort to rejoin provincial America to the mainstream of English culture recalls rather The Waste Land. Genuinely Anglo-American in outlook, the book projects a New England which is ultimately an enlarged version of Cotton Mather himself, a pious citizen of \"The Metropolis of the whole English America\".",
"title": "Historical and theological writings"
},
{
"paragraph_id": 36,
"text": "Silverman argues that, although Mather glorifies New England's Puritan past, in the Magnalia he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a Transatlantic Protestant Christianity that included, in addition to Mather's own Congregationalists, also Presbyterians, Baptists, and low church Anglicans.",
"title": "Historical and theological writings"
},
{
"paragraph_id": 37,
"text": "In 1693 Mather also began work on a grand intellectual project that he titled Biblia Americana, which sought to provide a commentary and interpretation of the Christian Bible in light of \"all of the Learning in the World\". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project \"looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity.\" Mather could not find a publisher for the Biblia Americana, which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2023, seven of the ten volumes have appeared in print.",
"title": "Historical and theological writings"
},
{
"paragraph_id": 38,
"text": "In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689, and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts.",
"title": "Conflict with Governor Dudley"
},
{
"paragraph_id": 39,
"text": "Although the Mathers (to whom he was related by marriage), continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702. Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett.",
"title": "Conflict with Governor Dudley"
},
{
"paragraph_id": 40,
"text": "In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Outmaneuvered by Dudley, this political rivalry left Mather increasingly isolated at a time when Massachusetts society was steadily moving away from the Puritan tradition that Mather represented.",
"title": "Conflict with Governor Dudley"
},
{
"paragraph_id": 41,
"text": "Cotton Mather was a fellow of Harvard College from 1690 to 1702, and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of \"rector\" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen. Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president.",
"title": "Relationship with Harvard and Yale"
},
{
"paragraph_id": 42,
"text": "Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving the Puritan orthodoxy in New England. In 1718, Cotton convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation.",
"title": "Relationship with Harvard and Yale"
},
{
"paragraph_id": 43,
"text": "Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of its own number, the Rev. Benjamin Coleman, an old rival of Mather. When Coleman refused it, the presidency went finally to the Rev. Benjamin Wadsworth.",
"title": "Relationship with Harvard and Yale"
},
{
"paragraph_id": 44,
"text": "The practice of smallpox inoculation (as distinguished from to the later practice of vaccination) was developed possibly in 8th-century India or 10th-century China and by the 17th-century had reached Turkey. It was also practiced in western Africa, but it is not known when it started there. Inoculation or, rather, variolation, involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic in 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 45,
"text": "Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a \"pesthouse\".",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 46,
"text": "In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the Philosophical Transactions. Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 47,
"text": "By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared. Smallpox returned on April 22 of that year, when HMS Seahorse arrived from the West Indies carrying smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June, the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, many residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission. The death toll reached 101 in September, and the Selectmen, powerless to stop it, \"severely limited the length of time funeral bells could toll.\" As one response, legislators delegated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 48,
"text": "On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 49,
"text": "Boylston and Mather's inoculation crusade \"raised a horrid Clamour\" among the people of Boston. Both Boylston and Mather were \"Object[s] of their Fury; their furious Obloquies and Invectives\", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 50,
"text": "The New-England Courant published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams from Boston, who posted that \"several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful\", to those put forth in a pamphlet by Dr. William Douglass of Boston, entitled The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 51,
"text": "Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease, or an antipathy toward one, but the creation of one. For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 52,
"text": "As with most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted Matthew 9:12, when Jesus said: \"It is not the healthy who need a doctor, but the sick.\" William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians, and not meddle in areas where they lacked expertise. According to Douglass, smallpox inoculation was \"a medical experiment of consequence,\" one not to be undertaken lightly. He believed that not all learned individuals were qualified to doctor others, and while ministers took on several roles in the early years of the colony, including that of caring for the sick, they were now expected to stay out of state and civil affairs. Douglass felt that inoculation caused more deaths than it prevented. The only reason Mather had had success in it, he said, was because Mather had used it on children, who are naturally more resilient. Douglass vowed to always speak out against \"the wickedness of spreading infection\". Speak out he did: \"The battle between these two prestigious adversaries [Douglass and Mather] lasted far longer than the epidemic itself, and the literature accompanying the controversy was both vast and venomous.\"",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 53,
"text": "Generally, Puritan pastors favored the inoculation experiments. Increase Mather, Cotton's father, was joined by prominent pastors Benjamin Colman and William Cooper in openly propagating the use of inoculations. \"One of the classic assumptions of the Puritan mind was that the will of God was to be discerned in nature as well as in revelation.\" Nevertheless, Williams questioned whether the smallpox \"is not one of the strange works of God; and whether inoculation of it be not a fighting with the most High.\" He also asked his readers if the smallpox epidemic may have been given to them by God as \"punishment for sin,\" and warned that attempting to shield themselves from God's fury (via inoculation), would only serve to \"provoke him more\".",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 54,
"text": "Puritans found meaning in affliction, and they did not yet know why God was showing them disfavor through smallpox. Not to address their errant ways before attempting a cure could set them back in their \"errand\". Many Puritans believed that creating a wound and inserting poison was doing violence and therefore was antithetical to the healing art. They grappled with adhering to the Ten Commandments, with being proper church members and good caring neighbors. The apparent contradiction between harming or murdering a neighbor through inoculation and the Sixth Commandment—\"thou shalt not kill\"—seemed insoluble and hence stood as one of the main objections against the procedure. Williams maintained that because the subject of inoculation could not be found in the Bible, it was not the will of God, and therefore \"unlawful.\" He explained that inoculation violated The Golden Rule, because if one neighbor voluntarily infected another with disease, he was not doing unto others as he would have done to him. With the Bible as the Puritans' source for all decision-making, lack of scriptural evidence concerned many, and Williams vocally scorned Mather for not being able to reference an inoculation edict directly from the Bible.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 55,
"text": "With the smallpox epidemic catching speed and racking up a staggering death toll, a solution to the crisis was becoming more urgently needed by the day. The use of quarantine and various other efforts, such as balancing the body's humors, did not slow the spread of the disease. As news rolled in from town to town and correspondence arrived from overseas, reports of horrific stories of suffering and loss due to smallpox stirred mass panic among the people. \"By circa 1700, smallpox had become among the most devastating of epidemic diseases circulating in the Atlantic world.\"",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 56,
"text": "Mather strongly challenged the perception that inoculation was against the will of God and argued the procedure was not outside of Puritan principles. He wrote that \"whether a Christian may not employ this Medicine (let the matter of it be what it will) and humbly give Thanks to God's good Providence in discovering of it to a miserable World; and humbly look up to His Good Providence (as we do in the use of any other Medicine) It may seem strange, that any wise Christian cannot answer it. And how strangely do Men that call themselves Physicians betray their Anatomy, and their Philosophy, as well as their Divinity in their invectives against this Practice?\" The Puritan minister began to embrace the sentiment that smallpox was an inevitability for anyone, both the good and the wicked, yet God had provided them with the means to save themselves. Mather reported that, from his view, \"none that have used it ever died of the Small Pox, tho at the same time, it were so malignant, that at least half the People died, that were infected With it in the Common way.\"",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 57,
"text": "While Mather was experimenting with the procedure, prominent Puritan pastors Benjamin Colman and William Cooper expressed public and theological support for them. The practice of smallpox inoculation was eventually accepted by the general population due to first-hand experiences and personal relationships. Although many were initially wary of the concept, it was because people were able to witness the procedure's consistently positive results, within their own community of ordinary citizens, that it became widely utilized and supported. One important change in the practice after 1721 was regulated quarantine of inoculees.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 58,
"text": "Although Mather and Boylston were able to demonstrate the efficacy of the practice, the debate over inoculation would continue even beyond the epidemic of 1721–22. After overcoming considerable difficulty and achieving notable success, Boylston traveled to London in 1725, where he published his results and was elected to the Royal Society in 1726, with Mather formally receiving the honor two years prior.",
"title": "Advocacy for smallpox inoculation"
},
{
"paragraph_id": 59,
"text": "In 1716, Mather used different varieties of maize (\"Indian corn\") to conduct one of the first recorded experiments on plant hybridization. He described the results in a letter to his friend James Petiver:",
"title": "Other scientific work"
},
{
"paragraph_id": 60,
"text": "First: my Friend planted a Row of Indian corn that was Coloured Red and Blue; the rest of the Field being planted with corn of the yellow, which is the most usual color. To the Windward side, this Red and Blue Row, so infected Three or Four whole Rows, as to communicate the same Colour unto them; and part of ye Fifth and some of ye Sixth. But to the Leeward Side, no less than Seven or Eight Rows, had ye same Colour communicated unto them; and some small Impressions were made on those that were yet further off.",
"title": "Other scientific work"
},
{
"paragraph_id": 61,
"text": "In his Curiosa Americana (1712–1724) collection, Mather also announced that flowering plants reproduce sexually, an observation that later became the basis of the Linnaean system of plant classification. Mather may also have been the first to develop the concept of genetic dominance, which later would underpin Mendelian genetics.",
"title": "Other scientific work"
},
{
"paragraph_id": 62,
"text": "In 1713, the Secretary of the Royal Society of London, naturalist Richard Waller, informed Mather that he had been elected as a fellow of the Society. Mather was the eighth colonial American to join that learned body, with the first having been John Winthrop the Younger in 1662. During the controversies surrounding Mather's smallpox inoculation campaign of 1721, his adversaries questioned that credential on the grounds that Mather's name did not figure in the published lists of the Society's members. At the time, the Society responded that those published lists included only members who had been inducted in person and who were therefore entitled to vote in the Society's yearly elections. In May 1723, Mather's correspondent John Woodward discovered that, although Mather had been duly nominated in 1713, approved by the council, and informed by Waller of his election at that time, due to an oversight the nomination had not in fact been voted upon by the full assembly of fellows or the vote had not been recorded. After Woodward informed the Society of the situation, the members proceeded to elect Mather by a formal vote.",
"title": "Other scientific work"
},
{
"paragraph_id": 63,
"text": "Mather's enthusiasm for experimental science was strongly influenced by his reading of Robert Boyle's work. Mather was a significant popularizer of the new scientific knowledge and promoted Copernican heliocentrism in some of his sermons. He also argued against the spontaneous generation of life and compiled a medical manual titled The Angel of Bethesda that he hoped would assist people who were unable to procure the services of a physician, but which went unpublished in Mather's lifetime. This was the only comprehensive medical work written in colonial English-speaking America. Although much of what Mather included in that manual were folk remedies now regarded as unscientific or superstitious, some of them are still valid, including smallpox inoculation and the use of citrus juice to treat scurvy. Mather also outlined an early form of germ theory and discussed psychogenic diseases, while recommending hygiene, physical exercise, temperate diet, and avoidance of tobacco smoking.",
"title": "Other scientific work"
},
{
"paragraph_id": 64,
"text": "In his later years, Mather also promoted the professionalization of scientific research in America. He presented a Boston tradesman named Grafton Feveryear with the barometer that Feveryear used to make the first quantitative meteorological observations in New England, which he communicated to the Royal Society in 1727. Mather also sponsored Isaac Greenwood, a Harvard graduate and member of Mather's church, who travelled to London and collaborated with the Royal Society's curator of experiments, John Theophilus Desaguliers. Greenwood later became the first Hollis professor of mathematics and natural philosophy at Harvard, and may well have been the first American to practice science professionally.",
"title": "Other scientific work"
},
{
"paragraph_id": 65,
"text": "Cotton Mather's household included both free servants and a number of slaves who performed domestic chores. Surviving records indicate that, over the course of his lifetime, Mather owned at least three, and probably more, slaves. Like the vast majority of Christians at the time, but unlike his political rival Judge Samuel Sewall, Mather was never an abolitionist, although he did publicly denounce what he regarded as the illegal and inhuman aspects of the burgeoning Atlantic slave trade. In his book The Negro Christianized (1706), Mather insisted that slaveholders should treat their black slaves humanely and instruct them in Christianity with a view to promoting their salvation. Mather received black members of his congregation in his home and he paid a schoolteacher to instruct local black people in reading.",
"title": "Slavery and racial attitudes"
},
{
"paragraph_id": 66,
"text": "Mather consistently held that black Africans were \"of one Blood\" with the rest of mankind and that blacks and whites would meet as equals in Heaven. After a number of black people carried out arson attacks in Boston in 1723, Mather asked the outraged white Bostonians whether the black population had been \"always treated according to the Rules of Humanity? Are they treated as those, that are of one Blood with us, and those who have Immortal Souls in them, and are not mere Beasts of Burden?\"",
"title": "Slavery and racial attitudes"
},
{
"paragraph_id": 67,
"text": "Mather advocated the Christianization of black slaves both on religious grounds and as tending to make them more patient and faithful servants of their masters. In The Negro Christianized, Mather argued against the opinion of Richard Baxter that a Christian could not enslave another baptized Christian. The African slave Onesimus, from whom Mather first learned about smallpox inoculation, had been purchased for him as a gift by his congregation in 1706. Despite his efforts, Mather was unable to convert Onesimus to Christianity and finally manumitted him in 1716.",
"title": "Slavery and racial attitudes"
},
{
"paragraph_id": 68,
"text": "Throughout his career Mather was also keen to minister to convicted pirates. He produced a number of pamphlets and sermons concerning piracy, including Faithful Warnings to prevent Fearful Judgments; Instructions to the Living, from the Condition of the Dead; The Converted Sinner… A Sermon Preached in Boston, May 31, 1724, In the Hearing and at the Desire of certain Pirates; A Brief Discourse occasioned by a Tragical Spectacle of a Number of Miserables under Sentence of Death for Piracy; Useful Remarks. An Essay upon Remarkables in the Way of Wicked Men and The Vial Poured Out Upon the Sea. His father Increase had preached at the trial of Dutch pirate Peter Roderigo; Cotton Mather in turn preached at the trials and sometimes executions of pirate Captains (or the crews of) William Fly, John Quelch, Samuel Bellamy, William Kidd, Charles Harris, and John Phillips. He also ministered to Thomas Hawkins, Thomas Pound, and William Coward; having been convicted of piracy, they were jailed alongside \"Mary Glover the Irish Catholic witch,\" daughter of witch \"Goody\" Ann Glover at whose trial Mather had also preached.",
"title": "Sermons against pirates and piracy"
},
{
"paragraph_id": 69,
"text": "In his conversations with William Fly and his crew Mather scolded them: \"You have something within you, that will compell you to confess, That the Things which you have done, are most Unreasonable and Abominable. The Robberies and Piracies, you have committed, you can say nothing to Justify them. … It is a most hideous Article in the Heap of Guilt lying on you, that an Horrible Murder is charged upon you; There is a cry of Blood going up to Heaven against you.\"",
"title": "Sermons against pirates and piracy"
},
{
"paragraph_id": 70,
"text": "Cotton Mather was twice widowed, and only two of his 15 children survived him. He died on the day after his 65th birthday and was buried on Copp's Hill Burying Ground, in Boston's North End.",
"title": "Death and place of burial"
},
{
"paragraph_id": 71,
"text": "Mather was a prolific writer and industrious in having his works printed, including a vast number of his sermons.",
"title": "Works"
},
{
"paragraph_id": 72,
"text": "Mather's first published sermon, printed in 1686, concerned the execution of James Morgan, convicted of murder. Thirteen years later, Mather published the sermon in a compilation, along with other similar works, called Pillars of Salt.",
"title": "Works"
},
{
"paragraph_id": 73,
"text": "Magnalia Christi Americana, considered Mather's greatest work, was published in 1702, when he was 39. The book includes several biographies of saints and describes the process of the New England settlement. In this context \"saints\" does not refer to the canonized saints of the Catholic church, but to those Puritan divines about whom Mather is writing. It comprises seven total books, including Pietas in Patriam: The life of His Excellency Sir William Phips, originally published anonymously in London in 1697. Despite being one of Mather's best-known works, some have openly criticized it, labeling it as hard to follow and understand, and poorly paced and organized. However, other critics have praised Mather's work, citing it as one of the best efforts at properly documenting the establishment of America and growth of the people.",
"title": "Works"
},
{
"paragraph_id": 74,
"text": "In 1721, Mather published The Christian Philosopher, the first systematic book on science published in America. Mather attempted to show how Newtonian science and religion were in harmony. It was in part based on Robert Boyle's The Christian Virtuoso (1690). Mather reportedly took inspiration from Hayy ibn Yaqdhan, by the 12th-century Islamic philosopher Abu Bakr Ibn Tufail.",
"title": "Works"
},
{
"paragraph_id": 75,
"text": "Despite condemning the \"Mahometans\" as infidels, Mather viewed the novel's protagonist, Hayy, as a model for his ideal Christian philosopher and monotheistic scientist. Mather viewed Hayy as a noble savage and applied this in the context of attempting to understand the Native American Indians, in order to convert them to Puritan Christianity. Mather's short treatise on the Lord's Supper was later translated by his cousin Josiah Cotton.",
"title": "Works"
},
{
"paragraph_id": 76,
"text": "Marvel comics features a supervillain character named Cotton Mather with alias name, 'Witch-Slayer', that is an enemy of Spider-man. He first appears in the 1972 comic 'Marvel Team-Up' issue #41, and appears in the subsequent issues until issue #45.",
"title": "In popular culture"
},
{
"paragraph_id": 77,
"text": "The rock band Cotton Mather is named after Mather.",
"title": "In popular culture"
},
{
"paragraph_id": 78,
"text": "The Handsome Family's 2006 album Last Days of Wonder is named in reference to Mather's 1693 book Wonders of the Invisible World, which lyricist Rennie Sparks found intriguing because of what she called its \"madness brimming under the surface of things.\"",
"title": "In popular culture"
},
{
"paragraph_id": 79,
"text": "Howard da Silva portrayed Mather in Burn, Witch, Burn, a December 15, 1975 episode of the CBS Radio Mystery Theater.",
"title": "In popular culture"
},
{
"paragraph_id": 80,
"text": "One of the stories in Richard Brautigan′s collection Revenge of the Lawn is called ″1692 Cotton Mather Newsreel″.",
"title": "In popular culture"
},
{
"paragraph_id": 81,
"text": "Seth Gabel portrays Cotton Mather in the TV series Salem, which aired from 2014 to 2017.",
"title": "In popular culture"
},
{
"paragraph_id": 82,
"text": "Notes",
"title": "References"
},
{
"paragraph_id": 83,
"text": "References",
"title": "References"
}
] | Cotton Mather was a Puritan clergyman and author in colonial New England, who wrote extensively on theological, historical, and scientific subjects. After being educated at Harvard College, he joined his father Increase as minister of the Congregationalist Old North Meeting House in Boston, Massachusetts, where he preached for the rest of his life. He has been referred to as the "first American Evangelical". A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor imposed on New England by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book Wonders of the Invisible World (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his Magnalia Christi Americana (1702). Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710. A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization. He also researched the variolation method of inoculation as a means of preventing smallpox contagion, which he learned about from an African-American slave that he owned, Onesimus. He dispatched many reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox caused violent controversy in Boston during the outbreak of 1721. Scientist and US founding father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book Bonifacius, or Essays to Do Good (1710) as a major influence on his life. | 2001-11-13T23:50:08Z | 2023-12-23T21:16:47Z | [
"Template:Infobox person",
"Template:Cite journal",
"Template:Wikiquote",
"Template:Salem",
"Template:Cite web",
"Template:Post-nominals",
"Template:Vague",
"Template:Notelist",
"Template:Librivox author",
"Template:Cite news",
"Template:Use mdy dates",
"Template:Full citation needed",
"Template:By whom",
"Template:Portal",
"Template:Sfn",
"Template:Commons category",
"Template:IPAc-en",
"Template:Blockquote",
"Template:Gutenberg author",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite magazine",
"Template:Internet Archive author",
"Template:Authority control",
"Template:Short description",
"Template:Main",
"Template:Cite book",
"Template:Wikisource author"
] | https://en.wikipedia.org/wiki/Cotton_Mather |
7,105 | Cordwainer Smith | Paul Myron Anthony Linebarger (July 11, 1913 – August 6, 1966), better known by his pen-name Cordwainer Smith, was an American author known for his science fiction works. Linebarger was a US Army officer, a noted East Asia scholar, and an expert in psychological warfare. Although his career as a writer was shortened by his death at the age of 53, he is considered one of science fiction's more talented and influential authors.
Linebarger's father, Paul Myron Wentworth Linebarger, was a lawyer, working as a judge in the Philippines. There he met Chinese nationalist Sun Yat-sen to whom he became an advisor. Linebarger's father sent his wife to give birth in Milwaukee, Wisconsin so that their child would be eligible to become president of the United States. Sun Yat-sen, who was considered the father of Chinese nationalism, became Linebarger's godfather.
His young life was unsettled as his father moved the family to a succession of places in Asia, Europe, and the United States. He was sometimes sent to boarding schools for safety. In all, Linebarger attended more than 30 schools. In 1919, while at a boarding school in Hawaii, he was blinded in his right eye and it was replaced by a glass eye. The vision in his remaining eye was impaired by infection.
Linebarger was familiar with English, German, and Chinese by adulthood. At the age of 23, he received a PhD in political science from Johns Hopkins University.
From 1937 to 1946, Linebarger held a faculty appointment at Duke University, where he began producing highly regarded works on Far Eastern affairs.
While retaining his professorship at Duke after the beginning of World War II, Linebarger began serving as a second lieutenant of the United States Army, where he was involved in the creation of the Office of War Information and the Operation Planning and Intelligence Board. He also helped organize the army's first psychological warfare section. In 1943, he was sent to China to coordinate military intelligence operations. When he later pursued his interest in China, Linebarger became a close confidant of Chiang Kai-shek. By the end of the war, he had risen to the rank of major.
In 1947, Linebarger moved to the Johns Hopkins University's School of Advanced International Studies in Washington, DC, where he served as Professor of Asiatic Studies. He used his experiences in the war to write the book Psychological Warfare (1948), regarded by many in the field as a classic text.
He eventually rose to the rank of colonel in the reserves. He was recalled to advise the British forces in the Malayan Emergency and the U.S. Eighth Army in the Korean War. While he was known to call himself a "visitor to small wars", he refrained from becoming involved in the Vietnam War, but is known to have done work for the Central Intelligence Agency. In 1969 CIA officer Miles Copeland Jr. wrote that Linebarger was "perhaps the leading practitioner of 'black' and 'gray' propaganda in the Western world". According to Joseph Burkholder Smith, a former CIA operative, he conducted classes in psychological warfare for CIA agents at his home in Washington under cover of his position at the School of Advanced International Studies. He traveled extensively and became a member of the Foreign Policy Association, and was called upon to advise President John F. Kennedy.
In 1936, Linebarger married Margaret Snow. They had a daughter in 1942 and another in 1947. They divorced in 1949.
In 1950, Linebarger married again to Genevieve Collins; they had no children. They remained married until his death from a heart attack in 1966, at Johns Hopkins University Medical Center in Baltimore, Maryland, at age 53. Linebarger had expressed a wish to retire to Australia, which he had visited in his travels. He is buried in Arlington National Cemetery, Section 35, Grave Number 4712. His widow, Genevieve Collins Linebarger, was interred with him on November 16, 1981.
Linebarger is long rumored to have been "Kirk Allen", the fantasy-haunted subject of "The Jet-Propelled Couch," a chapter in psychologist Robert M. Lindner's best-selling 1954 collection The Fifty-Minute Hour. According to Cordwainer Smith scholar Alan C. Elms, this speculation first reached print in Brian Aldiss's 1973 history of science fiction, Billion Year Spree; Aldiss, in turn, claimed to have received the information from science fiction fan and scholar Leon Stover. More recently, both Elms and librarian Lee Weinstein have gathered circumstantial evidence to support the case for Linebarger's being Allen, but both concede there is no direct proof that Linebarger was ever a patient of Lindner's or that he suffered from a disorder similar to that of Kirk Allen.
According to Frederik Pohl:
In his stories, which were a wonderful and inimitable blend of a strange, raucous poetry and a detailed technological scene, we begin to read of human beings in worlds so far from our own in space in time that they were no longer quite Earth (even when they were the third planet out from Sol), and the people were no longer quite human, but something perhaps better, certainly different.
Linebarger's identity as "Cordwainer Smith" was secret until his death. ("Cordwainer" is an archaic word for "a worker in cordwain or cordovan leather; a shoemaker", and a "smith" is "one who works in iron or other metals; esp. a blacksmith or farrier": two kinds of skilled workers with traditional materials.) Linebarger also employed the literary pseudonyms "Carmichael Smith" (for his political thriller Atomsk), "Anthony Bearden" (for his poetry) and "Felix C. Forrest" (for the novels Ria and Carola).
Some of Smith's stories are written in narrative styles closer to traditional Chinese stories than to most English-language fiction, as well as reminiscent of the Genji tales of Lady Murasaki. The total volume of his science fiction output is relatively small, because of his time-consuming profession and his early death.
Smith's works consist of one novel, originally published in two volumes in edited form as The Planet Buyer, also known as The Boy Who Bought Old Earth (1964) and The Underpeople (1968), and later restored to its original form as Norstrilia (1975); and 32 short stories (collected in The Rediscovery of Man (1993), including two versions of the short story "War No. 81-Q").
Linebarger's cultural links to China are partially expressed in the pseudonym "Felix C. Forrest", which he used in addition to "Cordwainer Smith". His godfather Sun Yat-Sen suggested to Linebarger that he adopt the Chinese name "Lin Bai-lo" (simplified Chinese: 林白乐; traditional Chinese: 林白樂; pinyin: Lín Báilè), which may be roughly translated as "Forest of Incandescent Bliss"; "Felix" is Latin for "happy". In his later years, Linebarger proudly wore a tie with the Chinese characters for this name embroidered on it.
As an expert in psychological warfare, Linebarger was very interested in the newly developing fields of psychology and psychiatry. He used many of their concepts in his fiction. His fiction often has religious overtones or motifs, particularly evident in characters who have no control over their actions. James B. Jordan argued for the importance of Anglicanism to Smith's works back to 1949. But Linebarger's daughter Rosana Hart has indicated that he did not become an Anglican until 1950, and was not strongly interested in religion until later still. The introduction to the collection Rediscovery of Man notes that from around 1960 Linebarger became more devout and expressed this in his writing. Linebarger's works are sometimes included in analyses of Christianity in fiction, along with the works of authors such as C. S. Lewis and J.R.R. Tolkien.
Most of Smith's stories are set in the far future, between 4,000 and 14,000 years from now. After the Ancient Wars devastate Earth, humans, ruled by the Instrumentality of Mankind, rebuild and expand to the stars in the Second Age of Space around 6000 AD. Over the next few thousand years, mankind spreads to thousands of worlds and human life becomes safe but sterile, as robots and the animal-derived Underpeople take over many human jobs and humans themselves are genetically programmed as embryos for specified duties. Towards the end of this period, the Instrumentality attempts to revive old cultures and languages in a process known as the Rediscovery of Man, where humans emerge from their mundane utopia and Underpeople are freed from slavery.
For years, Linebarger had a pocket notebook which he had filled with ideas about The Instrumentality and additional stories in the series. But while in a small boat in a lake or bay in the mid 60s, he leaned over the side, and his notebook fell out of his breast pocket into the water, where it was lost forever. Another story claims that he accidentally left the notebook in a restaurant in Rhodes in 1965. With the book gone, he felt empty of ideas, and decided to start a new series which was an allegory of Mid-Eastern politics.
Smith's stories describe a long future history of Earth. The settings range from a postapocalyptic landscape with walled cities, defended by agents of the Instrumentality, to a state of sterile utopia, in which freedom can be found only deep below the surface, in long-forgotten and buried anthropogenic strata. These features may place Smith's works within the Dying Earth subgenre of science fiction, though they are ultimately more optimistic and distinctive.
Smith's most celebrated short story is his first-published, "Scanners Live in Vain", which led many of its earliest readers to assume that "Cordwainer Smith" was a new pen name for one of the established giants of the genre. It was selected as one of the best science fiction short stories of the pre-Nebula Award period by the Science Fiction and Fantasy Writers of America, appearing in The Science Fiction Hall of Fame Volume One, 1929-1964. "The Ballad of Lost C'Mell" was similarly honored, appearing in The Science Fiction Hall of Fame, Volume Two.
After "Scanners Live in Vain", Smith's next story did not appear for several years, but from 1955 until his death in 1966 his stories appeared regularly, for the most part in Galaxy Science Fiction. His universe featured creations such as:
Titles marked with an asterisk * are independent stories not related to the Instrumentality universe. | [
{
"paragraph_id": 0,
"text": "Paul Myron Anthony Linebarger (July 11, 1913 – August 6, 1966), better known by his pen-name Cordwainer Smith, was an American author known for his science fiction works. Linebarger was a US Army officer, a noted East Asia scholar, and an expert in psychological warfare. Although his career as a writer was shortened by his death at the age of 53, he is considered one of science fiction's more talented and influential authors.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Linebarger's father, Paul Myron Wentworth Linebarger, was a lawyer, working as a judge in the Philippines. There he met Chinese nationalist Sun Yat-sen to whom he became an advisor. Linebarger's father sent his wife to give birth in Milwaukee, Wisconsin so that their child would be eligible to become president of the United States. Sun Yat-sen, who was considered the father of Chinese nationalism, became Linebarger's godfather.",
"title": "Early life and education"
},
{
"paragraph_id": 2,
"text": "His young life was unsettled as his father moved the family to a succession of places in Asia, Europe, and the United States. He was sometimes sent to boarding schools for safety. In all, Linebarger attended more than 30 schools. In 1919, while at a boarding school in Hawaii, he was blinded in his right eye and it was replaced by a glass eye. The vision in his remaining eye was impaired by infection.",
"title": "Early life and education"
},
{
"paragraph_id": 3,
"text": "Linebarger was familiar with English, German, and Chinese by adulthood. At the age of 23, he received a PhD in political science from Johns Hopkins University.",
"title": "Early life and education"
},
{
"paragraph_id": 4,
"text": "From 1937 to 1946, Linebarger held a faculty appointment at Duke University, where he began producing highly regarded works on Far Eastern affairs.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "While retaining his professorship at Duke after the beginning of World War II, Linebarger began serving as a second lieutenant of the United States Army, where he was involved in the creation of the Office of War Information and the Operation Planning and Intelligence Board. He also helped organize the army's first psychological warfare section. In 1943, he was sent to China to coordinate military intelligence operations. When he later pursued his interest in China, Linebarger became a close confidant of Chiang Kai-shek. By the end of the war, he had risen to the rank of major.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "In 1947, Linebarger moved to the Johns Hopkins University's School of Advanced International Studies in Washington, DC, where he served as Professor of Asiatic Studies. He used his experiences in the war to write the book Psychological Warfare (1948), regarded by many in the field as a classic text.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "He eventually rose to the rank of colonel in the reserves. He was recalled to advise the British forces in the Malayan Emergency and the U.S. Eighth Army in the Korean War. While he was known to call himself a \"visitor to small wars\", he refrained from becoming involved in the Vietnam War, but is known to have done work for the Central Intelligence Agency. In 1969 CIA officer Miles Copeland Jr. wrote that Linebarger was \"perhaps the leading practitioner of 'black' and 'gray' propaganda in the Western world\". According to Joseph Burkholder Smith, a former CIA operative, he conducted classes in psychological warfare for CIA agents at his home in Washington under cover of his position at the School of Advanced International Studies. He traveled extensively and became a member of the Foreign Policy Association, and was called upon to advise President John F. Kennedy.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "In 1936, Linebarger married Margaret Snow. They had a daughter in 1942 and another in 1947. They divorced in 1949.",
"title": "Marriage and family"
},
{
"paragraph_id": 9,
"text": "In 1950, Linebarger married again to Genevieve Collins; they had no children. They remained married until his death from a heart attack in 1966, at Johns Hopkins University Medical Center in Baltimore, Maryland, at age 53. Linebarger had expressed a wish to retire to Australia, which he had visited in his travels. He is buried in Arlington National Cemetery, Section 35, Grave Number 4712. His widow, Genevieve Collins Linebarger, was interred with him on November 16, 1981.",
"title": "Marriage and family"
},
{
"paragraph_id": 10,
"text": "Linebarger is long rumored to have been \"Kirk Allen\", the fantasy-haunted subject of \"The Jet-Propelled Couch,\" a chapter in psychologist Robert M. Lindner's best-selling 1954 collection The Fifty-Minute Hour. According to Cordwainer Smith scholar Alan C. Elms, this speculation first reached print in Brian Aldiss's 1973 history of science fiction, Billion Year Spree; Aldiss, in turn, claimed to have received the information from science fiction fan and scholar Leon Stover. More recently, both Elms and librarian Lee Weinstein have gathered circumstantial evidence to support the case for Linebarger's being Allen, but both concede there is no direct proof that Linebarger was ever a patient of Lindner's or that he suffered from a disorder similar to that of Kirk Allen.",
"title": "Case history debate"
},
{
"paragraph_id": 11,
"text": "According to Frederik Pohl:",
"title": "Science fiction style"
},
{
"paragraph_id": 12,
"text": "In his stories, which were a wonderful and inimitable blend of a strange, raucous poetry and a detailed technological scene, we begin to read of human beings in worlds so far from our own in space in time that they were no longer quite Earth (even when they were the third planet out from Sol), and the people were no longer quite human, but something perhaps better, certainly different.",
"title": "Science fiction style"
},
{
"paragraph_id": 13,
"text": "Linebarger's identity as \"Cordwainer Smith\" was secret until his death. (\"Cordwainer\" is an archaic word for \"a worker in cordwain or cordovan leather; a shoemaker\", and a \"smith\" is \"one who works in iron or other metals; esp. a blacksmith or farrier\": two kinds of skilled workers with traditional materials.) Linebarger also employed the literary pseudonyms \"Carmichael Smith\" (for his political thriller Atomsk), \"Anthony Bearden\" (for his poetry) and \"Felix C. Forrest\" (for the novels Ria and Carola).",
"title": "Science fiction style"
},
{
"paragraph_id": 14,
"text": "Some of Smith's stories are written in narrative styles closer to traditional Chinese stories than to most English-language fiction, as well as reminiscent of the Genji tales of Lady Murasaki. The total volume of his science fiction output is relatively small, because of his time-consuming profession and his early death.",
"title": "Science fiction style"
},
{
"paragraph_id": 15,
"text": "Smith's works consist of one novel, originally published in two volumes in edited form as The Planet Buyer, also known as The Boy Who Bought Old Earth (1964) and The Underpeople (1968), and later restored to its original form as Norstrilia (1975); and 32 short stories (collected in The Rediscovery of Man (1993), including two versions of the short story \"War No. 81-Q\").",
"title": "Science fiction style"
},
{
"paragraph_id": 16,
"text": "Linebarger's cultural links to China are partially expressed in the pseudonym \"Felix C. Forrest\", which he used in addition to \"Cordwainer Smith\". His godfather Sun Yat-Sen suggested to Linebarger that he adopt the Chinese name \"Lin Bai-lo\" (simplified Chinese: 林白乐; traditional Chinese: 林白樂; pinyin: Lín Báilè), which may be roughly translated as \"Forest of Incandescent Bliss\"; \"Felix\" is Latin for \"happy\". In his later years, Linebarger proudly wore a tie with the Chinese characters for this name embroidered on it.",
"title": "Science fiction style"
},
{
"paragraph_id": 17,
"text": "As an expert in psychological warfare, Linebarger was very interested in the newly developing fields of psychology and psychiatry. He used many of their concepts in his fiction. His fiction often has religious overtones or motifs, particularly evident in characters who have no control over their actions. James B. Jordan argued for the importance of Anglicanism to Smith's works back to 1949. But Linebarger's daughter Rosana Hart has indicated that he did not become an Anglican until 1950, and was not strongly interested in religion until later still. The introduction to the collection Rediscovery of Man notes that from around 1960 Linebarger became more devout and expressed this in his writing. Linebarger's works are sometimes included in analyses of Christianity in fiction, along with the works of authors such as C. S. Lewis and J.R.R. Tolkien.",
"title": "Science fiction style"
},
{
"paragraph_id": 18,
"text": "Most of Smith's stories are set in the far future, between 4,000 and 14,000 years from now. After the Ancient Wars devastate Earth, humans, ruled by the Instrumentality of Mankind, rebuild and expand to the stars in the Second Age of Space around 6000 AD. Over the next few thousand years, mankind spreads to thousands of worlds and human life becomes safe but sterile, as robots and the animal-derived Underpeople take over many human jobs and humans themselves are genetically programmed as embryos for specified duties. Towards the end of this period, the Instrumentality attempts to revive old cultures and languages in a process known as the Rediscovery of Man, where humans emerge from their mundane utopia and Underpeople are freed from slavery.",
"title": "Science fiction style"
},
{
"paragraph_id": 19,
"text": "For years, Linebarger had a pocket notebook which he had filled with ideas about The Instrumentality and additional stories in the series. But while in a small boat in a lake or bay in the mid 60s, he leaned over the side, and his notebook fell out of his breast pocket into the water, where it was lost forever. Another story claims that he accidentally left the notebook in a restaurant in Rhodes in 1965. With the book gone, he felt empty of ideas, and decided to start a new series which was an allegory of Mid-Eastern politics.",
"title": "Science fiction style"
},
{
"paragraph_id": 20,
"text": "Smith's stories describe a long future history of Earth. The settings range from a postapocalyptic landscape with walled cities, defended by agents of the Instrumentality, to a state of sterile utopia, in which freedom can be found only deep below the surface, in long-forgotten and buried anthropogenic strata. These features may place Smith's works within the Dying Earth subgenre of science fiction. They are ultimately more optimistic and distinctive.",
"title": "Science fiction style"
},
{
"paragraph_id": 21,
"text": "Smith's most celebrated short story is his first-published, \"Scanners Live in Vain\", which led many of its earliest readers to assume that \"Cordwainer Smith\" was a new pen name for one of the established giants of the genre. It was selected as one of the best science fiction short stories of the pre-Nebula Award period by the Science Fiction and Fantasy Writers of America, appearing in The Science Fiction Hall of Fame Volume One, 1929-1964. \"The Ballad of Lost C'Mell\" was similarly honored, appearing in The Science Fiction Hall of Fame, Volume Two.",
"title": "Science fiction style"
},
{
"paragraph_id": 22,
"text": "After \"Scanners Live in Vain\", Smith's next story did not appear for several years, but from 1955 until his death in 1966 his stories appeared regularly, for the most part in Galaxy Science Fiction. His universe featured creations such as:",
"title": "Science fiction style"
},
{
"paragraph_id": 23,
"text": "Titles marked with an asterisk * are independent stories not related to the Instrumentality universe.",
"title": "Published fiction"
}
] | Paul Myron Anthony Linebarger, better known by his pen-name Cordwainer Smith, was an American author known for his science fiction works. Linebarger was a US Army officer, a noted East Asia scholar, and an expert in psychological warfare. Although his career as a writer was shortened by his death at the age of 53, he is considered one of science fiction's more talented and influential authors. | 2001-11-14T13:30:47Z | 2023-11-28T18:40:34Z | [
"Template:Reflist",
"Template:Use mdy dates",
"Template:Cite web",
"Template:Authority control",
"Template:Isfdb name",
"Template:More citations needed section",
"Template:Not a typo",
"Template:Cite book",
"Template:Gutenberg author",
"Template:Internet Archive author",
"Template:Col-2",
"Template:R",
"Template:Col-begin",
"Template:Cite magazine",
"Template:Cite news",
"Template:Short description",
"Template:Blockquote",
"Template:Commonscat",
"Template:Librivox author",
"Template:Zh",
"Template:Official website",
"Template:LCAuth",
"Template:Col-end",
"Template:IMDb name",
"Template:FadedPage",
"Template:Infobox writer",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Cordwainer_Smith |
7,110 | CSS (disambiguation) | CSS, or Cascading Style Sheets, is a language used to describe the style of document presentations in web development.
CSS may also refer to: | [
{
"paragraph_id": 0,
"text": "CSS, or Cascading Style Sheets, is a language used to describe the style of document presentations in web development.",
"title": ""
},
{
"paragraph_id": 1,
"text": "CSS may also refer to:",
"title": ""
}
] | CSS, or Cascading Style Sheets, is a language used to describe the style of document presentations in web development. CSS may also refer to: | 2001-11-14T19:08:09Z | 2023-11-21T20:42:31Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Lang",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CSS_(disambiguation) |
7,118 | Churnsike Lodge | Churnsike Lodge is an early Victorian hunting lodge situated in the parish of Greystead, West Northumberland, England. Constructed in 1850 by the Charlton family, descendants of the noted Border Reivers family of the English Middle March, the lodge formed part of the extensive Hesleyside estate, located some 10 miles from Hesleyside Hall itself.
Consisting of the main house, stable block, hunting-dog kennels and gamekeepers bothy, when the property was acquired by the Chesters Estate in 1887 the 'Cairnsyke' estate consisted of several thousand acres of moorland, much of which was managed to support shooting of the formerly populous black grouse. Although much of this land has now reverted to fellside or has been otherwise managed as part of the commercial timber plantations of Kielder Forest, areas of heather moorland persist, dotted with remnants of the shooting butts. It is with reference to these fells that the 1887 sale catalogue described the estate as being the "Finest grouse moor in the Kingdom".
Historically, the Lodge was home to the Irthing Head and Kielder hounds, regionally renowned and headed by the locally famed fox hunter William Dodd. Dodd, and his hounds, are repeatedly referenced in the traditional Northumbrian ballads of James Armstrong's 'Wanny Blossoms'.
Having fallen into ruin by the 1980s, the property fell into the care of the Forestry Commission and was slated for demolition, as many properties in the area were, until being privately purchased. The former gamekeepers bothy now serves as a holiday-home. | [
{
"paragraph_id": 0,
"text": "Churnsike Lodge is an early Victorian hunting lodge situated in the parish of Greystead, West Northumberland, England. Constructed in 1850 by the Charlton family, descendants of the noted Border Reivers family of the English Middle March, the lodge formed part of the extensive Hesleyside estate, located some 10 miles from Hesleyside Hall itself.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Consisting of the main house, stable block, hunting-dog kennels and gamekeepers bothy, when the property was acquired by the Chesters Estate in 1887 the 'Cairnsyke' estate consisted of several thousand acres of moorland, much of which was managed to support shooting of the formerly populous black grouse. Although much of this land has now reverted to fellside or has been otherwise managed as part of the commercial timber plantations of Kielder Forest, areas of heather moorland persist, dotted with remnants of the shooting butts. It is with reference to these fells that the 1887 sale catalogue described the estate as being the \"Finest grouse moor in the Kingdom\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "Historically, the Lodge was home to the Irthing Head and Kielder hounds, regionally renowned and headed by the locally famed fox hunter William Dodd. Dodd, and his hounds, are repeatedly referenced in the traditional Northumbrian ballads of James Armstrong's 'Wanny Blossoms'.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Having fallen into ruin by the 1980s, the property fell into the care of the Forestry Commission and was slated for demolition, as many properties in the area were, until being privately purchased. The former gamekeepers bothy now serves as a holiday-home.",
"title": ""
},
{
"paragraph_id": 4,
"text": "",
"title": "External links"
}
] | Churnsike Lodge is an early Victorian hunting lodge situated in the parish of Greystead, West Northumberland, England. Constructed in 1850 by the Charlton family, descendants of the noted Border Reivers family of the English Middle March, the lodge formed part of the extensive Hesleyside estate, located some 10 miles from Hesleyside Hall itself. Consisting of the main house, stable block, hunting-dog kennels and gamekeepers bothy, when the property was acquired by the Chesters Estate in 1887 the 'Cairnsyke' estate consisted of several thousand acres of moorland, much of which was managed to support shooting of the formerly populous black grouse. Although much of this land has now reverted to fellside or has been otherwise managed as part of the commercial timber plantations of Kielder Forest, areas of heather moorland persist, dotted with remnants of the shooting butts. It is with reference to these fells that the 1887 sale catalogue described the estate as being the "Finest grouse moor in the Kingdom". Historically, the Lodge was home to the Irthing Head and Kielder hounds, regionally renowned and headed by the locally famed fox hunter William Dodd. Dodd, and his hounds, are repeatedly referenced in the traditional Northumbrian ballads of James Armstrong's 'Wanny Blossoms'. Having fallen into ruin by the 1980s, the property fell into the care of the Forestry Commission and was slated for demolition, as many properties in the area were, until being privately purchased. The former gamekeepers bothy now serves as a holiday-home. | 2022-12-02T01:47:59Z | [
"Template:Use dmy dates",
"Template:Infobox building",
"Template:Reflist",
"Template:Commons category-inline",
"Template:Northumberland-struct-stub",
"Template:Use British English",
"Template:Cite web",
"Template:Cite book",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Churnsike_Lodge |
|
7,119 | William Kidd | William Kidd (c. 1654 – 23 May 1701), also known as Captain William Kidd or simply Captain Kidd, was a Scottish privateer. Conflicting accounts exist regarding his early life, but he was likely born in Dundee and later settled in New York City. By 1690, Kidd had become a highly successful privateer, commissioned to protect English interests in North America and the West Indies.
In 1695, Kidd received a royal commission from the Earl of Bellomont, the governor of New York, Massachusetts Bay and New Hampshire, to hunt down pirates and enemy French ships in the Indian Ocean. He received a letter of marque and set sail on a new ship, Adventure Galley, the following year. On his voyage he failed to find many targets, lost much of his crew and faced threats of mutiny. In 1698, Kidd captured his greatest prize, the 400-ton Quedagh Merchant, a ship hired by Armenian merchants and captained by an Englishman. The political climate in England had turned against him, however, and he was denounced as a pirate. Bellomont engineered Kidd's arrest upon his return to Boston and sent him to stand trial in London. He was found guilty and hanged in 1701.
Kidd was romanticized after his death and his exploits became a popular subject of pirate-themed works of fiction. The belief that he had left buried treasure contributed significantly to his legend, which inspired numerous treasure hunts in the following centuries.
Kidd was born in Dundee, Scotland, prior to 15 October 1654. While claims have been made of alternate birthplaces, including Greenock and even Belfast, he himself stated that he came from Dundee in testimony given to the High Court of Admiralty in 1695. There are also records of his baptism taking place in Dundee. A local society supported the family financially after the death of the father. The myth that his "father was thought to have been a Church of Scotland minister" has been discounted, insofar as there is no mention of the name in comprehensive Church of Scotland records for the period. Others still hold the contrary view.
As a young man, Kidd settled in New York City, which the English had taken over from the Dutch. There he befriended many prominent colonial citizens, including three governors. Some accounts suggest that he served as a seaman's apprentice on a pirate ship during this time, before beginning his more famous seagoing exploits as a privateer.
By 1689, Kidd was a member of a French–English pirate crew sailing the Caribbean under Captain Jean Fantin. During one of their voyages, Kidd and other crew members mutinied, ousting the captain and sailing to the British colony of Nevis. There they renamed the ship Blessed William, and Kidd became captain either as a result of election by the ship's crew, or by appointment of Christopher Codrington, governor of the island of Nevis.
Kidd was an experienced leader and sailor by that time, and the Blessed William became part of Codrington's small fleet assembled to defend Nevis from the French, with whom the English were at war. The governor did not pay the sailors for their defensive service, telling them instead to take their pay from the French. Kidd and his men attacked the French island of Marie-Galante, destroying its only town and looting the area, and gathering around 2,000 pounds sterling.
Later, during the War of the Grand Alliance, on commissions from the provinces of New York and Massachusetts Bay, Kidd captured an enemy privateer off the New England coast. Shortly afterwards, he was awarded £150 for successful privateering in the Caribbean. One year later, Captain Robert Culliford, a notorious pirate, stole Kidd's ship while he was ashore at Antigua in the West Indies.
In New York City, Kidd was active in financially supporting the construction of Trinity Church, New York.
On 16 May 1691, Kidd married Sarah Bradley Cox Oort, who was still in her early twenties. She had already been twice widowed and was one of the wealthiest women in New York, based on an inheritance from her first husband.
On 11 December 1695, Richard Coote, 1st Earl of Bellomont, who was governing New York, Massachusetts, and New Hampshire, asked the "trusty and well beloved Captain Kidd" to attack Thomas Tew, John Ireland, Thomas Wake, William Maze, and all others who associated themselves with pirates, along with any enemy French ships. His request had the weight of the Crown behind it, and Kidd would have been considered disloyal, carrying much social stigma, to refuse Bellomont. This request preceded the voyage that contributed to Kidd's reputation as a pirate and marked his image in history and folklore.
Four-fifths of the cost for the 1696 venture was paid by noble lords, who were among the most powerful men in England: the Earl of Orford, the Baron of Romney, the Duke of Shrewsbury, and Sir John Somers. Kidd was presented with a letter of marque, signed personally by King William III of England, which authorized him as a privateer. This letter reserved 10% of the loot for the Crown, and Henry Gilbert's The Book of Pirates suggests that the King fronted some of the money for the voyage himself. Kidd and his acquaintance Colonel Robert Livingston orchestrated the whole plan; they sought additional funding from merchant Sir Richard Blackham. Kidd also had to sell his ship Antigua to raise funds.
The new ship, Adventure Galley, was well suited to the task of catching pirates, weighing over 284 tons burthen and equipped with 34 cannon, oars, and 150 men. The oars were a key advantage, as they enabled Adventure Galley to manoeuvre in a battle when the winds had calmed and other ships were dead in the water. Kidd took pride in personally selecting the crew, choosing only those whom he deemed to be the best and most loyal officers.
As the Adventure Galley sailed down the Thames, Kidd unaccountably failed to salute a Navy yacht at Greenwich, as custom dictated. The Navy yacht then fired a shot to make him show respect, and Kidd's crew responded with an astounding display of impudence – by turning and slapping their backsides in [disdain].
Because of Kidd's refusal to salute, the Navy vessel's captain retaliated by pressing much of Kidd's crew into naval service, despite the captain's strong protests and the general exclusion of privateer crew from such action. Short-handed, Kidd sailed for New York City, capturing a French vessel en route (which was legal under the terms of his commission). To make up for the lack of officers, Kidd picked up replacement crew in New York, the vast majority of whom were known and hardened criminals, some likely former pirates.
Among Kidd's officers was quartermaster Hendrick van der Heul. The quartermaster was considered "second in command" to the captain in pirate culture of this era. It is not clear, however, if Van der Heul exercised this degree of responsibility because Kidd was authorised as a privateer. Van der Heul is notable because he might have been African or of Dutch descent. A contemporary source describes him as a "small black Man". If Van der Heul was of African ancestry, he would be considered the highest-ranking black pirate or privateer so far identified. Van der Heul later became a master's mate on a merchant vessel and was never convicted of piracy.
In September 1696, Kidd weighed anchor and set course for the Cape of Good Hope in southern Africa. A third of his crew died on the Comoros due to an outbreak of cholera, the brand-new ship developed many leaks, and he failed to find the pirates whom he expected to encounter off Madagascar.
With his ambitious enterprise failing, Kidd became desperate to cover its costs. Yet he failed to attack several ships when given a chance, including a Dutchman and a New York privateer. Both were out of bounds of his commission. The latter would have been considered out of bounds because New York was part of the territories of the Crown, and Kidd was authorised in part by the New York governor. Some of the crew deserted Kidd the next time that Adventure Galley anchored offshore. Those who decided to stay on made constant open threats of mutiny.
Kidd killed one of his own crewmen on 30 October 1697. Kidd's gunner William Moore was on deck sharpening a chisel when a Dutch ship appeared. Moore urged Kidd to attack the Dutchman, an act that would have been considered piratical, since the nation was not at war with England, but also certain to anger Dutch-born King William. Kidd refused, calling Moore a lousy dog. Moore retorted, "If I am a lousy dog, you have made me so; you have brought me to ruin and many more." Kidd reportedly dropped an ironbound bucket on Moore, fracturing his skull. Moore died the following day.
Seventeenth-century English admiralty law allowed captains great leeway in using violence against their crew, but killing was not permitted. Kidd said to his ship's surgeon that he had "good friends in England, that will bring me off for that".
Escaped prisoners told stories of being hoisted up by the arms and "drubbed" (thrashed) with a drawn cutlass by Kidd. On one occasion, crew members sacked the trading ship Mary and tortured several of its crew members while Kidd and the other captain, Thomas Parker, conversed privately in Kidd's cabin.
Kidd was declared a pirate very early in his voyage by a Royal Navy officer, to whom he had promised "thirty men or so". Kidd sailed away during the night to preserve his crew, rather than subject them to Royal Navy impressment. The letter of marque was intended to protect a privateer's crew from such impressment.
On 30 January 1698, Kidd raised French colours and took his greatest prize, the 400-ton Quedagh Merchant, an Indian ship hired by Armenian merchants. It was loaded with satins, muslins, gold, silver, and a variety of East Indian merchandise, as well as extremely valuable silks. The captain of Quedagh Merchant was an Englishman named Wright, who had purchased passes from the French East India Company promising him the protection of the French Crown.
When news of his capture of this ship reached England, however, officials classified Kidd as a pirate. Various naval commanders were ordered to "pursue and seize the said Kidd and his accomplices" for the "notorious piracies" they had committed.
Kidd kept the French sea passes of the Quedagh Merchant, as well as the vessel itself. British admiralty and vice-admiralty courts (especially in North America) previously had often winked at privateers' excesses amounting to piracy. Kidd might have hoped that the passes would provide the legal fig leaf that would allow him to keep Quedagh Merchant and her cargo. Renaming the seized merchantman as Adventure Prize, he set sail for Madagascar.
On 1 April 1698, Kidd reached Madagascar. After meeting privately with trader Tempest Rogers (who would later be accused of trading and selling Kidd's looted East India goods), he found the first pirate of his voyage, Robert Culliford (the same man who had stolen Kidd's ship at Antigua years before) and his crew aboard Mocha Frigate.
Two contradictory accounts exist of how Kidd proceeded. According to A General History of the Pyrates, published more than 25 years after the event by an author whose identity is disputed by historians, Kidd made peaceful overtures to Culliford: he "drank their Captain's health", swearing that "he was in every respect their Brother", and gave Culliford "a Present of an Anchor and some Guns". This account appears to be based on the testimony of Kidd's crewmen Joseph Palmer and Robert Bradinham at his trial.
The other version was presented by Richard Zacks in his 2002 book The Pirate Hunter: The True Story of Captain Kidd. According to Zacks, Kidd was unaware that Culliford had only about 20 crew with him, and felt ill-manned and ill-equipped to take Mocha Frigate until his two prize ships and crews arrived. He decided to leave Culliford alone until these reinforcements arrived. After Adventure Prize and Rouparelle reached port, Kidd ordered his crew to attack Culliford's Mocha Frigate. However, his crew refused to attack Culliford and threatened instead to shoot Kidd. Zacks does not refer to any source for his version of events.
Both accounts agree that most of Kidd's men abandoned him for Culliford. Only 13 remained with Adventure Galley. Deciding to return home, Kidd left the Adventure Galley behind, ordering her to be burnt because she had become worm-eaten and leaky. Before burning the ship, he salvaged every last scrap of metal, such as hinges. With the loyal remnant of his crew, he returned to the Caribbean aboard the Adventure Prize, stopping first at St. Augustine's Bay for repairs. Some of his crew later returned to North America on their own as passengers aboard Giles Shelley's ship Nassau.
The 1698 Act of Grace, which offered a royal pardon to pirates in the Indian Ocean, specifically exempted Kidd (and Henry Every) from receiving a pardon, in Kidd's case due to his association with prominent Whig statesmen. Kidd became aware both that he was wanted and that he could not make use of the Act of Grace upon his arrival in Anguilla, his first port of call since St. Augustine's Bay.
Prior to returning to New York City, Kidd knew that he was wanted as a pirate and that several English men-of-war were searching for him. Realizing that Adventure Prize was a marked vessel, he cached it in the Caribbean Sea, sold off his remaining plundered goods through pirate and fence William Burke, and continued towards New York aboard a sloop. He deposited some of his treasure on Gardiners Island, hoping to use his knowledge of its location as a bargaining tool. Kidd landed in Oyster Bay to avoid mutinous crew who had gathered in New York City. To avoid them, Kidd sailed 120 nautical miles (220 km; 140 mi) around the eastern tip of Long Island, and doubled back 90 nautical miles (170 km; 100 mi) along the Sound to Oyster Bay. He felt this was a safer passage than the highly trafficked Narrows between Staten Island and Brooklyn.
New York Governor Bellomont, also an investor, was away in Boston, Massachusetts. Aware of the accusations against Kidd, Bellomont was afraid of being implicated in piracy himself and believed that presenting Kidd to England in chains was his best chance to survive. He lured Kidd into Boston with false promises of clemency, and ordered him arrested on 6 July 1699. Kidd was placed in Stone Prison, spending most of the time in solitary confinement. His wife, Sarah, was also arrested and imprisoned.
The conditions of Kidd's imprisonment were extremely harsh, and were said to have driven him at least temporarily insane. By then, Bellomont had turned against Kidd and other pirates, writing that the inhabitants of Long Island were "a lawless and unruly people" protecting pirates who had "settled among them".
The civil government had changed and the new Tory ministry hoped to use Kidd as a tool to discredit the Whigs who had backed him, but Kidd refused to name names, naively confident his patrons would reward his loyalty by interceding on his behalf. There is speculation that he could have been spared had he talked. Finding Kidd politically useless, the Tory leaders sent him to stand trial before the High Court of Admiralty in London, for the charges of piracy on high seas and the murder of William Moore. Whilst awaiting trial, Kidd was confined in the infamous Newgate Prison, regarded even by the standards of the day as a disgusting hellhole, and was held there for almost 2 years before his trial even began.
Kidd had two lawyers to assist in his defense. However, the money that the Admiralty had set aside for his defense was misplaced until right before the start of the trial, and he had no legal counsel until the morning the trial started, with time for just one brief consultation before it began. He was shocked to learn at his trial that he was charged with murder. He was found guilty on all charges (murder and five counts of piracy) and sentenced to death. He was hanged in a public execution on 23 May 1701, at Execution Dock, Wapping, in London. He had to be hanged twice. On the first attempt, the hangman's rope broke and Kidd survived. Although some in the crowd called for Kidd's release, claiming the breaking of the rope was a sign from God, Kidd was hanged again minutes later, and died. His body was gibbeted over the River Thames at Tilbury Point, as a warning to future would-be pirates, for three years.
Of Kidd's associates, Gabriel Loffe, Able Owens, and Hugh Parrot were also convicted of piracy. They were pardoned just prior to hanging at Execution Dock. Robert Lamley, William Jenkins and Richard Barleycorn were released.
Kidd's Whig backers were embarrassed by his trial. Far from rewarding his loyalty, they participated in the effort to convict him by depriving him of the money and information which might have provided him with some legal defence. In particular, the two sets of French passes he had kept were missing at his trial. These passes (and others dated 1700) resurfaced in the early 20th century, misfiled with other government papers in a London building. These passes confirm Kidd's version of events, and call the extent of his guilt as a pirate into question.
A broadside song, "Captain Kidd's Farewell to the Seas, or, the Famous Pirate's Lament", was printed shortly after his execution. It popularised the common belief that Kidd had confessed to the charges.
The belief that Kidd had left buried treasure contributed greatly to the growth of his legend. The 1701 broadside song "Captain Kid's Farewell to the Seas, or, the Famous Pirate's Lament" lists "Two hundred bars of gold, and rix dollars manifold, we seized uncontrolled".
It also inspired numerous treasure hunts conducted on Oak Island in Nova Scotia; in Suffolk County, Long Island in New York where Gardiner's Island is located; Charles Island in Milford, Connecticut; the Thimble Islands in Connecticut and Cockenoe Island in Westport, Connecticut.
Kidd was also alleged to have buried treasure on the Rahway River in New Jersey across the Arthur Kill from Staten Island.
Captain Kidd did bury a small cache of treasure on Gardiners Island off the eastern coast of Long Island, New York, in a spot known as Cherry Tree Field. Governor Bellomont reportedly had it found and sent to England to be used as evidence against Kidd in his trial.
Some time in the 1690s, Kidd visited Block Island where he was supplied with provisions by Mrs. Mercy (Sands) Raymond, daughter of the mariner James Sands. It was said that before he departed, Kidd asked Mrs. Raymond to hold out her apron, which he then filled with gold and jewels as payment for her hospitality. After her husband Joshua Raymond died, Mercy moved with her family to northern New London, Connecticut (later Montville), where she purchased much land. The Raymond family was said by family acquaintances to have been "enriched by the apron".
On Grand Manan in the Bay of Fundy, as early as 1875, there were searches on the west side of the island for treasure allegedly buried by Kidd during his time as a privateer. For nearly 200 years, this remote area of the island has been called "Money Cove".
In 1983, Cork Graham and Richard Knight searched for Captain Kidd's buried treasure off the Vietnamese island of Phú Quốc. Knight and Graham were caught, convicted of illegally landing on Vietnamese territory, and each assessed a $10,000 fine. They were imprisoned for 11 months until they paid the fine.
For years, people and treasure hunters tried to locate the Quedagh Merchant. It was reported on 13 December 2007 that "wreckage of a pirate ship abandoned by Captain Kidd in the 17th century has been found by divers in shallow waters off the Dominican Republic". The waters in which the ship was found were less than ten feet deep and were only 70 feet (21 m) off Catalina Island, just to the south of La Romana on the Dominican coast. The ship is believed to be "the remains of the Quedagh Merchant". Charles Beeker, the director of Academic Diving and Underwater Science Programs in Indiana University (Bloomington)'s School of Health, Physical Education, and Recreation, was one of the experts leading the Indiana University diving team. He said that it was "remarkable that the wreck has remained undiscovered all these years given its location", and that the ship had been the subject of so many prior failed searches. Captain Kidd's cannon, an artifact from the shipwreck, was added to a permanent exhibit at The Children's Museum of Indianapolis in 2011.
In May 2015, a 50-kilogram (110 lb) ingot expected to be silver was found in a wreck off the coast of Île Sainte-Marie in Madagascar by a team led by marine archaeologist Barry Clifford. It was believed to be part of Captain Kidd's treasure. Clifford gave the booty to Hery Rajaonarimampianina, President of Madagascar. But, in July 2015, a UNESCO scientific and technical advisory body reported that testing showed the ingot consisted of 95% lead, and speculated that the wreck in question was a broken part of the Sainte-Marie port constructions. | [
{
"paragraph_id": 0,
"text": "William Kidd (c. 1654 – 23 May 1701), also known as Captain William Kidd or simply Captain Kidd, was a Scottish privateer. Conflicting accounts exist regarding his early life, but he was likely born in Dundee and later settled in New York City. By 1690, Kidd had become a highly successful privateer, commissioned to protect English interests in North America and the West Indies.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 1695, Kidd received a royal commission from the Earl of Bellomont, the governor of New York, Massachusetts Bay and New Hampshire, to hunt down pirates and enemy French ships in the Indian Ocean. He received a letter of marque and set sail on a new ship, Adventure Galley, the following year. On his voyage he failed to find many targets, lost much of his crew and faced threats of mutiny. In 1698, Kidd captured his greatest prize, the 400-ton Quedagh Merchant, a ship hired by Armenian merchants and captained by an Englishman. The political climate in England had turned against him, however, and he was denounced as a pirate. Bellomont engineered Kidd's arrest upon his return to Boston and sent him to stand trial in London. He was found guilty and hanged in 1701.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Kidd was romanticized after his death and his exploits became a popular subject of pirate-themed works of fiction. The belief that he had left buried treasure contributed significantly to his legend, which inspired numerous treasure hunts in the following centuries.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Kidd was born in Dundee, Scotland prior to 15 October 1654. While claims have been made of alternate birthplaces, including Greenock and even Belfast, Kidd himself stated that he came from Dundee in testimony given to the High Court of Admiralty in 1695. There are also records of his baptism taking place in Dundee. A local society supported the family financially after the death of the father. The myth that his \"father was thought to have been a Church of Scotland minister\" has been discounted, insofar as there is no mention of the name in comprehensive Church of Scotland records for the period. Others still hold the contrary view.",
"title": "Life and career"
},
{
"paragraph_id": 4,
"text": "As a young man, Kidd settled in New York City, which the English had taken over from the Dutch. There he befriended many prominent colonial citizens, including three governors. Some accounts suggest that he served as a seaman's apprentice on a pirate ship during this time, before beginning his more famous seagoing exploits as a privateer.",
"title": "Life and career"
},
{
"paragraph_id": 5,
"text": "By 1689, Kidd was a member of a French–English pirate crew sailing the Caribbean under Captain Jean Fantin. During one of their voyages, Kidd and other crew members mutinied, ousting the captain and sailing to the British colony of Nevis. There they renamed the ship Blessed William, and Kidd became captain either as a result of election by the ship's crew, or by appointment of Christopher Codrington, governor of the island of Nevis.",
"title": "Life and career"
},
{
"paragraph_id": 6,
"text": "Kidd was an experienced leader and sailor by that time, and the Blessed William became part of Codrington's small fleet assembled to defend Nevis from the French, with whom the English were at war. The governor did not pay the sailors for their defensive service, telling them instead to take their pay from the French. Kidd and his men attacked the French island of Marie-Galante, destroying its only town and looting the area, and gathering around 2,000 pounds sterling.",
"title": "Life and career"
},
{
"paragraph_id": 7,
"text": "Later, during the War of the Grand Alliance, on commissions from the provinces of New York and Massachusetts Bay, Kidd captured an enemy privateer off the New England coast. Shortly afterwards, he was awarded £150 for successful privateering in the Caribbean. One year later, Captain Robert Culliford, a notorious pirate, stole Kidd's ship while he was ashore at Antigua in the West Indies.",
"title": "Life and career"
},
{
"paragraph_id": 8,
"text": "In New York City, Kidd was active in financially supporting the construction of Trinity Church, New York.",
"title": "Life and career"
},
{
"paragraph_id": 9,
"text": "On 16 May 1691, Kidd married Sarah Bradley Cox Oort, who was still in her early twenties. She had already been twice widowed and was one of the wealthiest women in New York, based on an inheritance from her first husband.",
"title": "Life and career"
},
{
"paragraph_id": 10,
"text": "On 11 December 1695, Richard Coote, 1st Earl of Bellomont, who was governing New York, Massachusetts, and New Hampshire, asked the \"trusty and well beloved Captain Kidd\" to attack Thomas Tew, John Ireland, Thomas Wake, William Maze, and all others who associated themselves with pirates, along with any enemy French ships. His request had the weight of the Crown behind it, and refusing Bellomont would have marked Kidd as disloyal and carried much social stigma. This request preceded the voyage that contributed to Kidd's reputation as a pirate and marked his image in history and folklore.",
"title": "Life and career"
},
{
"paragraph_id": 11,
"text": "Four-fifths of the cost for the 1696 venture was paid by noble lords, who were among the most powerful men in England: the Earl of Orford, the Baron of Romney, the Duke of Shrewsbury, and Sir John Somers. Kidd was presented with a letter of marque, signed personally by King William III of England, which authorized him as a privateer. This letter reserved 10% of the loot for the Crown, and Henry Gilbert's The Book of Pirates suggests that the King fronted some of the money for the voyage himself. Kidd and his acquaintance Colonel Robert Livingston orchestrated the whole plan; they sought additional funding from merchant Sir Richard Blackham. Kidd also had to sell his ship Antigua to raise funds.",
"title": "Life and career"
},
{
"paragraph_id": 12,
"text": "The new ship, Adventure Galley, was well suited to the task of catching pirates, weighing over 284 tons burthen and equipped with 34 cannon, oars, and 150 men. The oars were a key advantage, as they enabled Adventure Galley to manoeuvre in a battle when the winds had calmed and other ships were dead in the water. Kidd took pride in personally selecting the crew, choosing only those whom he deemed to be the best and most loyal officers.",
"title": "Life and career"
},
{
"paragraph_id": 13,
"text": "As the Adventure Galley sailed down the Thames, Kidd unaccountably failed to salute a Navy yacht at Greenwich, as custom dictated. The Navy yacht then fired a shot to make him show respect, and Kidd's crew responded with an astounding display of impudence – by turning and slapping their backsides in disdain.",
"title": "Life and career"
},
{
"paragraph_id": 14,
"text": "Because of Kidd's refusal to salute, the Navy vessel's captain retaliated by pressing much of Kidd's crew into naval service, despite Kidd's strong protests and the general exclusion of privateer crew from such action. Short-handed, Kidd sailed for New York City, capturing a French vessel en route (which was legal under the terms of his commission). To make up for the lack of officers, Kidd picked up replacement crew in New York, the vast majority of whom were known and hardened criminals, some likely former pirates.",
"title": "Life and career"
},
{
"paragraph_id": 15,
"text": "Among Kidd's officers was quartermaster Hendrick van der Heul. The quartermaster was considered \"second in command\" to the captain in pirate culture of this era. It is not clear, however, if Van der Heul exercised this degree of responsibility because Kidd was authorised as a privateer. Van der Heul is notable because he might have been African or of Dutch descent. A contemporary source describes him as a \"small black Man\". If Van der Heul was of African ancestry, he would be considered the highest-ranking black pirate or privateer so far identified. Van der Heul later became a master's mate on a merchant vessel and was never convicted of piracy.",
"title": "Life and career"
},
{
"paragraph_id": 16,
"text": "In September 1696, Kidd weighed anchor and set course for the Cape of Good Hope in southern Africa. A third of his crew died on the Comoros due to an outbreak of cholera, the brand-new ship developed many leaks, and he failed to find the pirates whom he expected to encounter off Madagascar.",
"title": "Life and career"
},
{
"paragraph_id": 17,
"text": "With his ambitious enterprise failing, Kidd became desperate to cover its costs. Yet he failed to attack several ships when given a chance, including a Dutchman and a New York privateer. Both were out of bounds of his commission. The latter would have been considered out of bounds because New York was part of the territories of the Crown, and Kidd was authorised in part by the New York governor. Some of the crew deserted Kidd the next time that Adventure Galley anchored offshore. Those who decided to stay on made constant open threats of mutiny.",
"title": "Life and career"
},
{
"paragraph_id": 18,
"text": "Kidd killed one of his own crewmen on 30 October 1697. Kidd's gunner William Moore was on deck sharpening a chisel when a Dutch ship appeared. Moore urged Kidd to attack the Dutchman, an act that would have been considered piratical, since the nation was not at war with England, but also certain to anger Dutch-born King William. Kidd refused, calling Moore a lousy dog. Moore retorted, \"If I am a lousy dog, you have made me so; you have brought me to ruin and many more.\" Kidd reportedly dropped an ironbound bucket on Moore, fracturing his skull. Moore died the following day.",
"title": "Life and career"
},
{
"paragraph_id": 19,
"text": "Seventeenth-century English admiralty law allowed captains great leeway in using violence against their crew, but killing was not permitted. Kidd said to his ship's surgeon that he had \"good friends in England, that will bring me off for that\".",
"title": "Life and career"
},
{
"paragraph_id": 20,
"text": "Escaped prisoners told stories of being hoisted up by the arms and \"drubbed\" (thrashed) with a drawn cutlass by Kidd. On one occasion, crew members sacked the trading ship Mary and tortured several of its crew members while Kidd and the other captain, Thomas Parker, conversed privately in Kidd's cabin.",
"title": "Life and career"
},
{
"paragraph_id": 21,
"text": "Kidd was declared a pirate very early in his voyage by a Royal Navy officer, to whom he had promised \"thirty men or so\". Kidd sailed away during the night to preserve his crew, rather than subject them to Royal Navy impressment. The letter of marque was intended to protect a privateer's crew from such impressment.",
"title": "Life and career"
},
{
"paragraph_id": 22,
"text": "On 30 January 1698, Kidd raised French colours and took his greatest prize, the 400-ton Quedagh Merchant, an Indian ship hired by Armenian merchants. It was loaded with satins, muslins, gold, silver, and a variety of East Indian merchandise, as well as extremely valuable silks. The captain of Quedagh Merchant was an Englishman named Wright, who had purchased passes from the French East India Company promising him the protection of the French Crown.",
"title": "Life and career"
},
{
"paragraph_id": 23,
"text": "When news of his capture of this ship reached England, however, officials classified Kidd as a pirate. Various naval commanders were ordered to \"pursue and seize the said Kidd and his accomplices\" for the \"notorious piracies\" they had committed.",
"title": "Life and career"
},
{
"paragraph_id": 24,
"text": "Kidd kept the French sea passes of the Quedagh Merchant, as well as the vessel itself. British admiralty and vice-admiralty courts (especially in North America) previously had often winked at privateers' excesses amounting to piracy. Kidd might have hoped that the passes would provide the legal fig leaf that would allow him to keep Quedagh Merchant and her cargo. Renaming the seized merchantman as Adventure Prize, he set sail for Madagascar.",
"title": "Life and career"
},
{
"paragraph_id": 25,
"text": "On 1 April 1698, Kidd reached Madagascar. After meeting privately with trader Tempest Rogers (who would later be accused of trading and selling Kidd's looted East India goods), he found the first pirate of his voyage, Robert Culliford (the same man who had stolen Kidd's ship at Antigua years before) and his crew aboard Mocha Frigate.",
"title": "Life and career"
},
{
"paragraph_id": 26,
"text": "Two contradictory accounts exist of how Kidd proceeded. According to A General History of the Pyrates, published more than 25 years after the event by an author whose identity is disputed by historians, Kidd made peaceful overtures to Culliford: he \"drank their Captain's health\", swearing that \"he was in every respect their Brother\", and gave Culliford \"a Present of an Anchor and some Guns\". This account appears to be based on the testimony of Kidd's crewmen Joseph Palmer and Robert Bradinham at his trial.",
"title": "Life and career"
},
{
"paragraph_id": 27,
"text": "The other version was presented by Richard Zacks in his 2002 book The Pirate Hunter: The True Story of Captain Kidd. According to Zacks, Kidd was unaware that Culliford had only about 20 crew with him, and felt ill-manned and ill-equipped to take Mocha Frigate until his two prize ships and crews arrived. He decided to leave Culliford alone until these reinforcements arrived. After Adventure Prize and Rouparelle reached port, Kidd ordered his crew to attack Culliford's Mocha Frigate. However, his crew refused to attack Culliford and threatened instead to shoot Kidd. Zacks does not refer to any source for his version of events.",
"title": "Life and career"
},
{
"paragraph_id": 28,
"text": "Both accounts agree that most of Kidd's men abandoned him for Culliford. Only 13 remained with Adventure Galley. Deciding to return home, Kidd left the Adventure Galley behind, ordering her to be burnt because she had become worm-eaten and leaky. Before burning the ship, he salvaged every last scrap of metal, such as hinges. With the loyal remnant of his crew, he returned to the Caribbean aboard the Adventure Prize, stopping first at St. Augustine's Bay for repairs. Some of his crew later returned to North America on their own as passengers aboard Giles Shelley's ship Nassau.",
"title": "Life and career"
},
{
"paragraph_id": 29,
"text": "The 1698 Act of Grace, which offered a royal pardon to pirates in the Indian Ocean, specifically exempted Kidd (and Henry Every) from receiving a pardon, in Kidd's case due to his association with prominent Whig statesmen. Kidd became aware both that he was wanted and that he could not make use of the Act of Grace upon his arrival in Anguilla, his first port of call since St. Augustine's Bay.",
"title": "Life and career"
},
{
"paragraph_id": 30,
"text": "Prior to returning to New York City, Kidd knew that he was wanted as a pirate and that several English men-of-war were searching for him. Realizing that Adventure Prize was a marked vessel, he cached it in the Caribbean Sea, sold off his remaining plundered goods through pirate and fence William Burke, and continued towards New York aboard a sloop. He deposited some of his treasure on Gardiners Island, hoping to use his knowledge of its location as a bargaining tool. Kidd landed in Oyster Bay to avoid the mutinous crew who had gathered in New York City. To reach it, he sailed 120 nautical miles (220 km; 140 mi) around the eastern tip of Long Island and doubled back 90 nautical miles (170 km; 100 mi) along the Sound to Oyster Bay, a passage he felt was safer than the highly trafficked Narrows between Staten Island and Brooklyn.",
"title": "Life and career"
},
{
"paragraph_id": 31,
"text": "New York Governor Bellomont, also an investor, was away in Boston, Massachusetts. Aware of the accusations against Kidd, Bellomont was afraid of being implicated in piracy himself and believed that presenting Kidd to England in chains was his best chance to survive. He lured Kidd into Boston with false promises of clemency, and ordered him arrested on 6 July 1699. Kidd was placed in Stone Prison, spending most of the time in solitary confinement. His wife, Sarah, was also arrested and imprisoned.",
"title": "Life and career"
},
{
"paragraph_id": 32,
"text": "The conditions of Kidd's imprisonment were extremely harsh, and were said to have driven him at least temporarily insane. By then, Bellomont had turned against Kidd and other pirates, writing that the inhabitants of Long Island were \"a lawless and unruly people\" protecting pirates who had \"settled among them\".",
"title": "Life and career"
},
{
"paragraph_id": 33,
"text": "The civil government had changed and the new Tory ministry hoped to use Kidd as a tool to discredit the Whigs who had backed him, but Kidd refused to name names, naively confident his patrons would reward his loyalty by interceding on his behalf. There is speculation that he could have been spared had he talked. Finding Kidd politically useless, the Tory leaders sent him to stand trial before the High Court of Admiralty in London on charges of piracy on the high seas and the murder of William Moore. Whilst awaiting trial, Kidd was confined in the infamous Newgate Prison, regarded even by the standards of the day as a disgusting hellhole, and was held there for almost two years before his trial even began.",
"title": "Life and career"
},
{
"paragraph_id": 34,
"text": "Kidd had two lawyers to assist in his defense. However, the money that the Admiralty had set aside for his defense was misplaced until right before the trial started, and he had no legal counsel until that morning, with time for just one brief consultation before proceedings began. He was shocked to learn at his trial that he was charged with murder. He was found guilty on all charges (murder and five counts of piracy) and sentenced to death. He was hanged in a public execution on 23 May 1701, at Execution Dock, Wapping, in London. He had to be hanged twice. On the first attempt, the hangman's rope broke and Kidd survived. Although some in the crowd called for Kidd's release, claiming the breaking of the rope was a sign from God, Kidd was hanged again minutes later, and died. His body was gibbeted over the River Thames at Tilbury Point, as a warning to future would-be pirates, for three years.",
"title": "Life and career"
},
{
"paragraph_id": 35,
"text": "Of Kidd's associates, Gabriel Loffe, Able Owens, and Hugh Parrot were also convicted of piracy. They were pardoned just prior to hanging at Execution Dock. Robert Lamley, William Jenkins and Richard Barleycorn were released.",
"title": "Life and career"
},
{
"paragraph_id": 36,
"text": "Kidd's Whig backers were embarrassed by his trial. Far from rewarding his loyalty, they participated in the effort to convict him by depriving him of the money and information which might have provided him with some legal defence. In particular, the two sets of French passes he had kept were missing at his trial. These passes (and others dated 1700) resurfaced in the early 20th century, misfiled with other government papers in a London building. These passes confirm Kidd's version of events, and call the extent of his guilt as a pirate into question.",
"title": "Life and career"
},
{
"paragraph_id": 37,
"text": "A broadside song, \"Captain Kidd's Farewell to the Seas, or, the Famous Pirate's Lament\", was printed shortly after his execution. It popularised the common belief that Kidd had confessed to the charges.",
"title": "Life and career"
},
{
"paragraph_id": 38,
"text": "The belief that Kidd had left buried treasure contributed greatly to the growth of his legend. The 1701 broadside song \"Captain Kid's Farewell to the Seas, or, the Famous Pirate's Lament\" lists \"Two hundred bars of gold, and rix dollars manifold, we seized uncontrolled\".",
"title": "Mythology and legend"
},
{
"paragraph_id": 39,
"text": "It also inspired numerous treasure hunts conducted on Oak Island in Nova Scotia; in Suffolk County, Long Island in New York where Gardiner's Island is located; Charles Island in Milford, Connecticut; the Thimble Islands in Connecticut and Cockenoe Island in Westport, Connecticut.",
"title": "Mythology and legend"
},
{
"paragraph_id": 40,
"text": "Kidd was also alleged to have buried treasure on the Rahway River in New Jersey across the Arthur Kill from Staten Island.",
"title": "Mythology and legend"
},
{
"paragraph_id": 41,
"text": "Captain Kidd did bury a small cache of treasure on Gardiners Island off the eastern coast of Long Island, New York, in a spot known as Cherry Tree Field. Governor Bellomont reportedly had it found and sent to England to be used as evidence against Kidd in his trial.",
"title": "Mythology and legend"
},
{
"paragraph_id": 42,
"text": "Some time in the 1690s, Kidd visited Block Island where he was supplied with provisions by Mrs. Mercy (Sands) Raymond, daughter of the mariner James Sands. It was said that before he departed, Kidd asked Mrs. Raymond to hold out her apron, which he then filled with gold and jewels as payment for her hospitality. After her husband Joshua Raymond died, Mercy moved with her family to northern New London, Connecticut (later Montville), where she purchased much land. The Raymond family was said by family acquaintances to have been \"enriched by the apron\".",
"title": "Mythology and legend"
},
{
"paragraph_id": 43,
"text": "On Grand Manan in the Bay of Fundy, as early as 1875, there were searches on the west side of the island for treasure allegedly buried by Kidd during his time as a privateer. For nearly 200 years, this remote area of the island has been called \"Money Cove\".",
"title": "Mythology and legend"
},
{
"paragraph_id": 44,
"text": "In 1983, Cork Graham and Richard Knight searched for Captain Kidd's buried treasure off the Vietnamese island of Phú Quốc. Knight and Graham were caught, convicted of illegally landing on Vietnamese territory, and each assessed a $10,000 fine. They were imprisoned for 11 months until they paid the fine.",
"title": "Mythology and legend"
},
{
"paragraph_id": 45,
"text": "For years, people and treasure hunters tried to locate the Quedagh Merchant. It was reported on 13 December 2007 that \"wreckage of a pirate ship abandoned by Captain Kidd in the 17th century has been found by divers in shallow waters off the Dominican Republic\". The waters in which the ship was found were less than ten feet deep and were only 70 feet (21 m) off Catalina Island, just to the south of La Romana on the Dominican coast. The ship is believed to be \"the remains of the Quedagh Merchant\". Charles Beeker, the director of Academic Diving and Underwater Science Programs in Indiana University (Bloomington)'s School of Health, Physical Education, and Recreation, was one of the experts leading the Indiana University diving team. He said that it was \"remarkable that the wreck has remained undiscovered all these years given its location\", and that the ship had been the subject of so many prior failed searches. Captain Kidd's cannon, an artifact from the shipwreck, was added to a permanent exhibit at The Children's Museum of Indianapolis in 2011.",
"title": "Quedagh Merchant found"
},
{
"paragraph_id": 46,
"text": "In May 2015, a 50-kilogram (110 lb) ingot, initially thought to be silver, was found in a wreck off the coast of Île Sainte-Marie in Madagascar by a team led by marine archaeologist Barry Clifford. It was believed to be part of Captain Kidd's treasure. Clifford gave the booty to Hery Rajaonarimampianina, President of Madagascar. In July 2015, however, a UNESCO scientific and technical advisory body reported that testing showed the ingot consisted of 95% lead, and speculated that the wreck in question was a broken part of the Sainte-Marie port constructions.",
"title": "False find"
}
] | William Kidd, also known as Captain William Kidd or simply Captain Kidd, was a Scottish privateer. Conflicting accounts exist regarding his early life, but he was likely born in Dundee and later settled in New York City. By 1690, Kidd had become a highly successful privateer, commissioned to protect English interests in North America and the West Indies. In 1695, Kidd received a royal commission from the Earl of Bellomont, the governor of New York, Massachusetts Bay and New Hampshire, to hunt down pirates and enemy French ships in the Indian Ocean. He received a letter of marque and set sail on a new ship, Adventure Galley, the following year. On his voyage he failed to find many targets, lost much of his crew and faced threats of mutiny. In 1698, Kidd captured his greatest prize, the 400-ton Quedagh Merchant, a ship hired by Armenian merchants and captained by an Englishman. The political climate in England had turned against him, however, and he was denounced as a pirate. Bellomont engineered Kidd's arrest upon his return to Boston and sent him to stand trial in London. He was found guilty and hanged in 1701. Kidd was romanticized after his death and his exploits became a popular subject of pirate-themed works of fiction. The belief that he had left buried treasure contributed significantly to his legend, which inspired numerous treasure hunts in the following centuries. | 2001-11-15T16:23:30Z | 2023-12-31T17:48:53Z | [
"Template:About",
"Template:Redirect",
"Template:Convert",
"Template:Cite news",
"Template:Cite EB1911",
"Template:Div col end",
"Template:Better source needed",
"Template:ISBN",
"Template:Cite book",
"Template:Refbegin",
"Template:Full citation needed",
"Template:Appletons' Poster",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Cite web",
"Template:Use British English",
"Template:Circa",
"Template:Div col",
"Template:Blockquote",
"Template:Columns-list",
"Template:Cite journal",
"Template:Pirates",
"Template:Short description",
"Template:Cbignore",
"Template:Webarchive",
"Template:Pirates of the Modern Age",
"Template:Infobox pirate",
"Template:Page needed",
"Template:Fact",
"Template:Cite magazine",
"Template:Citation",
"Template:Refend"
] | https://en.wikipedia.org/wiki/William_Kidd |
7,120 | Calreticulin | Calreticulin also known as calregulin, CRP55, CaBP3, calsequestrin-like protein, and endoplasmic reticulum resident protein 60 (ERp60) is a protein that in humans is encoded by the CALR gene.
Calreticulin is a multifunctional soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering it inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein.
The term "Mobilferrin" is considered to be the same as calreticulin by some sources.
Calreticulin binds to misfolded proteins and prevents them from being exported from the endoplasmic reticulum to the Golgi apparatus.
A similar quality-control molecular chaperone, calnexin, performs the same service for soluble proteins as does calreticulin; however, calnexin is a membrane-bound protein. Both proteins, calnexin and calreticulin, have the function of binding to oligosaccharides containing terminal glucose residues, thereby targeting them for degradation. The ability of calreticulin and calnexin to bind carbohydrates associates them with the lectin protein family. In normal cellular function, trimming of glucose residues off the core oligosaccharide added during N-linked glycosylation is a part of protein processing. If "overseer" enzymes note that residues are misfolded, proteins within the rER will re-add glucose residues so that other calreticulin/calnexin can bind to these proteins and prevent them from proceeding to the Golgi. This leads these aberrantly folded proteins down a path whereby they are targeted for degradation.
Studies on transgenic mice reveal that calreticulin is a cardiac embryonic gene that is essential during development.
Calreticulin and calnexin are also integral in the production of MHC class I proteins. As newly synthesized MHC class I α-chains enter the endoplasmic reticulum, calnexin binds on to them retaining them in a partly folded state. After the β2-microglobulin binds to the peptide-loading complex (PLC), calreticulin (along with ERp57) takes over the job of chaperoning the MHC class I protein while the tapasin links the complex to the transporter associated with antigen processing (TAP) complex. This association prepares the MHC class I to bind an antigen for presentation on the cell surface.
Calreticulin is also found in the nucleus, suggesting that it may have a role in transcription regulation. Calreticulin binds to the synthetic peptide KLGFFKR, which is almost identical to an amino acid sequence in the DNA-binding domain of the superfamily of nuclear receptors. The amino terminus of calreticulin interacts with the DNA-binding domain of the glucocorticoid receptor and prevents the receptor from binding to its specific glucocorticoid response element. Calreticulin can inhibit the binding of androgen receptor to its hormone-responsive DNA element and can inhibit androgen receptor and retinoic acid receptor transcriptional activities in vivo, as well as retinoic acid-induced neuronal differentiation. Thus, calreticulin can act as an important modulator of the regulation of gene transcription by nuclear hormone receptors.
Calreticulin binds to antibodies in certain sera of systemic lupus and Sjögren patients that contain anti-Ro/SSA antibodies. Systemic lupus erythematosus is associated with increased autoantibody titers against calreticulin, but calreticulin is not a Ro/SS-A antigen. Earlier papers referred to calreticulin as an Ro/SS-A antigen, but this was later disproven. Increased autoantibody titer against human calreticulin is found in infants with complete congenital heart block of both the IgG and IgM classes.
In 2013, two groups detected calreticulin mutations in a majority of JAK2-negative/MPL-negative patients with essential thrombocythemia and primary myelofibrosis, which makes CALR mutations the second most common in myeloproliferative neoplasms. All mutations (insertions or deletions) affected the last exon, generating a reading frame shift in the resulting protein that creates a novel terminal peptide and causes loss of the endoplasmic reticulum KDEL retention signal.
Calreticulin (CRT) is expressed in many cancer cells and plays a role in promoting macrophages to engulf hazardous cancerous cells. The reason most of these cells are not destroyed is the presence of another molecule, the signal CD47, which blocks CRT. Hence antibodies that block CD47 might be useful as a cancer treatment. In mouse models of myeloid leukemia and non-Hodgkin lymphoma, anti-CD47 antibodies were effective in clearing cancer cells while normal cells were unaffected.
Calreticulin has been shown to interact with Perforin and NK2 homeobox 1. | [
{
"paragraph_id": 0,
"text": "Calreticulin also known as calregulin, CRP55, CaBP3, calsequestrin-like protein, and endoplasmic reticulum resident protein 60 (ERp60) is a protein that in humans is encoded by the CALR gene.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Calreticulin is a multifunctional soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering it inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The term \"Mobilferrin\" is considered to be the same as calreticulin by some sources.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Calreticulin binds to misfolded proteins and prevents them from being exported from the endoplasmic reticulum to the Golgi apparatus.",
"title": "Function"
},
{
"paragraph_id": 4,
"text": "A similar quality-control molecular chaperone, calnexin, performs the same service for soluble proteins as does calreticulin; however, calnexin is a membrane-bound protein. Both proteins, calnexin and calreticulin, have the function of binding to oligosaccharides containing terminal glucose residues, thereby targeting them for degradation. The ability of calreticulin and calnexin to bind carbohydrates associates them with the lectin protein family. In normal cellular function, trimming of glucose residues off the core oligosaccharide added during N-linked glycosylation is a part of protein processing. If \"overseer\" enzymes note that residues are misfolded, proteins within the rER will re-add glucose residues so that other calreticulin/calnexin can bind to these proteins and prevent them from proceeding to the Golgi. This leads these aberrantly folded proteins down a path whereby they are targeted for degradation.",
"title": "Function"
},
{
"paragraph_id": 5,
"text": "Studies on transgenic mice reveal that calreticulin is a cardiac embryonic gene that is essential during development.",
"title": "Function"
},
{
"paragraph_id": 6,
"text": "Calreticulin and calnexin are also integral in the production of MHC class I proteins. As newly synthesized MHC class I α-chains enter the endoplasmic reticulum, calnexin binds on to them retaining them in a partly folded state. After the β2-microglobulin binds to the peptide-loading complex (PLC), calreticulin (along with ERp57) takes over the job of chaperoning the MHC class I protein while the tapasin links the complex to the transporter associated with antigen processing (TAP) complex. This association prepares the MHC class I to bind an antigen for presentation on the cell surface.",
"title": "Function"
},
{
"paragraph_id": 7,
"text": "Calreticulin is also found in the nucleus, suggesting that it may have a role in transcription regulation. Calreticulin binds to the synthetic peptide KLGFFKR, which is almost identical to an amino acid sequence in the DNA-binding domain of the superfamily of nuclear receptors. The amino terminus of calreticulin interacts with the DNA-binding domain of the glucocorticoid receptor and prevents the receptor from binding to its specific glucocorticoid response element. Calreticulin can inhibit the binding of androgen receptor to its hormone-responsive DNA element and can inhibit androgen receptor and retinoic acid receptor transcriptional activities in vivo, as well as retinoic acid-induced neuronal differentiation. Thus, calreticulin can act as an important modulator of the regulation of gene transcription by nuclear hormone receptors.",
"title": "Function"
},
{
"paragraph_id": 8,
"text": "Calreticulin binds to antibodies in certain sera of systemic lupus and Sjögren patients that contain anti-Ro/SSA antibodies. Systemic lupus erythematosus is associated with increased autoantibody titers against calreticulin, but calreticulin is not a Ro/SS-A antigen. Earlier papers referred to calreticulin as an Ro/SS-A antigen, but this was later disproven. Increased autoantibody titer against human calreticulin is found in infants with complete congenital heart block of both the IgG and IgM classes.",
"title": "Clinical significance"
},
{
"paragraph_id": 9,
"text": "In 2013, two groups detected calreticulin mutations in a majority of JAK2-negative/MPL-negative patients with essential thrombocythemia and primary myelofibrosis, which makes CALR mutations the second most common in myeloproliferative neoplasms. All mutations (insertions or deletions) affected the last exon, generating a reading frame shift in the resulting protein that creates a novel terminal peptide and causes loss of the endoplasmic reticulum KDEL retention signal.",
"title": "Clinical significance"
},
{
"paragraph_id": 10,
"text": "Calreticulin (CRT) is expressed in many cancer cells and plays a role in promoting macrophages to engulf hazardous cancerous cells. The reason most of these cells are not destroyed is the presence of another molecule, the signal CD47, which blocks CRT. Hence antibodies that block CD47 might be useful as a cancer treatment. In mouse models of myeloid leukemia and non-Hodgkin lymphoma, anti-CD47 antibodies were effective in clearing cancer cells while normal cells were unaffected.",
"title": "Role in cancer"
},
{
"paragraph_id": 11,
"text": "Calreticulin has been shown to interact with Perforin and NK2 homeobox 1.",
"title": "Interactions"
}
] | Calreticulin also known as calregulin, CRP55, CaBP3, calsequestrin-like protein, and endoplasmic reticulum resident protein 60 (ERp60) is a protein that in humans is encoded by the CALR gene. Calreticulin is a multifunctional soluble protein that binds Ca2+ ions (a second messenger in signal transduction), rendering it inactive. The Ca2+ is bound with low affinity, but high capacity, and can be released on a signal (see inositol trisphosphate). Calreticulin is located in storage compartments associated with the endoplasmic reticulum and is considered an ER resident protein. The term "Mobilferrin" is considered to be the same as calreticulin by some sources. | 2001-11-15T18:16:39Z | 2023-12-19T13:08:21Z | [
"Template:Distinguish",
"Template:Reflist",
"Template:Cite web",
"Template:Refbegin",
"Template:Antiangiogenics",
"Template:PDB Gallery",
"Template:Short description",
"Template:Clear",
"Template:Lectins",
"Template:Infobox gene",
"Template:Refend",
"Template:Cite journal",
"Template:Cite book",
"Template:Calcium signaling",
"Template:MeSH name"
] | https://en.wikipedia.org/wiki/Calreticulin |
7,122 | Crannog | A crannog (/ˈkrænəɡ/; Irish: crannóg [ˈkɾˠan̪ˠoːɡ]; Scottish Gaelic: crannag [ˈkʰɾan̪ˠak]) is typically a partially or entirely artificial island, usually built in lakes and estuarine waters of Scotland, Wales, and Ireland. Unlike the prehistoric pile dwellings around the Alps, which were built on the shores and not inundated until later, crannogs were built in the water, thus forming artificial islands.
Crannogs were used as dwellings over five millennia, from the European Neolithic Period to as late as the 17th/early 18th century. In Scotland there is no convincing evidence in the archaeological record of Early and Middle Bronze Age or Norse Period use. The radiocarbon dating obtained from key sites such as Oakbank and Redcastle indicates at a 95.4 per cent confidence level that they date to the Late Bronze Age to Early Iron Age. The date ranges fall after around 800 BC and so could be considered Late Bronze Age by only the narrowest of margins.
Crannogs have been variously interpreted as free-standing wooden structures, as at Loch Tay, although more commonly they are composed of brush, stone or timber mounds that can be revetted with timber piles. However, in areas such as the Outer Hebrides of Scotland, timber was unavailable from the Neolithic era onwards. As a result, crannogs made completely of stone and supporting drystone architecture are common there. Today, crannogs typically appear as small, circular islets, often 10 to 30 metres (30 to 100 ft) in diameter, covered in dense vegetation due to their inaccessibility to grazing livestock.
The Irish word crannóg derives from Old Irish crannóc, which referred to a wooden structure or vessel, stemming from crann, which means "tree", suffixed with "-óg" which is a diminutive ending ultimately borrowed from Welsh. The suffix -óg is sometimes misunderstood by non-native Irish-speakers as óg, which is a separate word that means "young". This misunderstanding leads to a folk etymology whereby crannóg is misanalysed as crann óg, which is pronounced differently and means "a young tree". The modern sense of the term first appears sometime around the 12th century; its popularity spread in the medieval period along with the terms isle, ylle, inis, eilean or oileán.
There is some confusion over whether the term crannog originally referred to the structure atop the island or to the island itself. The additional meanings of Irish crannóg variously relate to 'structure/piece of wood', including 'crow's nest', 'pulpit', or 'driver's box on a coach'; to 'vessel/box/chest' more generally; and to 'wooden pin'. The Scottish Gaelic form is crannag and has the additional meanings of 'pulpit' and 'churn'. Thus, there is no real consensus on what the term crannog actually implies, although the modern adoption in the English language broadly refers to a partially or completely artificial islet that saw use from the prehistoric to the Post-Medieval period in Ireland and Scotland.
Crannogs are widespread in Ireland, with an estimated 1,200 examples, while Scotland has 389 sites officially listed as such. The actual number in Scotland varies considerably depending on definition—between about 350 and 500, due to the use of the term "island dun" for well over one hundred Hebridean examples—a distinction that has created a divide between mainland Scottish crannog and Hebridean islet settlement studies. Previously unknown crannogs in Scotland and Ireland are still being found as underwater surveys continue to investigate loch beds for completely submerged examples.
The largest concentrations of crannogs in Ireland are found in the Drumlin Belt of the Midlands, North and Northwest. In Scotland, crannogs are mostly found on the western coast, with high concentrations in Argyll and Dumfries and Galloway. In reality, the Western Isles contain the highest density of lake-settlements in Scotland, yet they are recognised under varying terms besides "crannog". One lone Welsh example exists at Llangorse Lake, probably a product of Irish influence.
Reconstructed Irish crannógs are located at Craggaunowen, County Clare, in the Irish National Heritage Park, County Wexford and at Castle Espie, County Down. In Scotland there are reconstructions at the "Scottish Crannog Centre" at Loch Tay, Perthshire; this centre offers guided tours and hands-on activities, including wool-spinning, wood-turning and making fire, holds events to celebrate wild cooking and crafts, and hosts yearly Midsummer, Lughnasadh and Samhain festivals.
Crannogs took on many different forms and methods of construction based on what was available in the immediate landscape. The classic image of a prehistoric crannog stems from both post-medieval illustrations and highly influential excavations, such as Milton Loch in Scotland by C. M. Piggot after World War II. The Milton Loch interpretation is of a small islet surrounded or defined at its edges by timber piles and a gangway, topped by a typical Iron Age roundhouse.
The choice of a small islet as a home may seem odd today, yet waterways were the main channels for both communication and travel until the 19th century in much of Ireland and, especially, Highland Scotland. Crannogs are traditionally interpreted as simple prehistorical farmsteads. They are also interpreted as boltholes in times of danger, as status symbols with limited access, and as inherited locations of power that imply a sense of legitimacy and ancestry towards ownership of the surrounding landscape.
A strict definition of a crannog, which has long been debated, requires the use of timber. Sites in the Western Isles do not satisfy this criterion, although their inhabitants shared the common habit of living on water. If not classed as "true" crannogs, small occupied islets (often at least partially artificial in nature) may be referred to as "island duns". But, rather confusingly, 22 islet-based sites are classified as "proper" crannogs due to the different interpretations of the inspectors or excavators who drew up field reports.
Hebridean island dwellings or crannogs were commonly built on both natural and artificial islets, usually reached by a stone causeway. The visible structural remains are traditionally interpreted as duns or, in more recent terminology, as "Atlantic roundhouses". This terminology has recently become popular when describing the entire range of robust, drystone structures that existed in later prehistoric Atlantic Scotland.
The majority of crannog excavations were poorly conducted (by modern standards) in the late 19th and early 20th centuries by early antiquarians, or were purely accidental finds as lochs were drained during the improvements to increase usable farmland or pasture. In some early digs, labourers hauled away tons of materials, with little regard to anything that was not of immediate economic value. Conversely, the vast majority of early attempts at proper excavation failed to accurately measure or record stratigraphy, thereby failing to provide a secure context for artefact finds. Thus only extremely limited interpretations are possible. Preservation and conservation techniques for waterlogged materials such as logboats or structural material were all but non-existent, and a number of extremely important finds were destroyed as a result: in some instances dried out for firewood.
From about 1900 to the late 1940s there was very little crannog excavation in Scotland, while some important and highly influential contributions were made in Ireland. In contrast, relatively few crannogs have been excavated since the Second World War. But this number has steadily grown, especially since the early 1980s, and may soon surpass pre-war totals. The overwhelming majority of crannogs show multiple phases of occupation and re-use, often extending over centuries. Thus the re-occupiers may have viewed crannogs as a legacy that was alive in local tradition and memory. Crannog reoccupation is important and significant, especially in the many instances of crannogs built near natural islets, which were often completely unused. This long chronology of use has been verified by both radiocarbon dating and more precisely by dendrochronology.
Interpretations of crannog function have not been static; instead they appear to have changed in both the archaeological and historic records. Rather than the simple domestic residences of prehistory, the medieval crannogs were increasingly seen as strongholds of the upper class or regional political players, such as the Gaelic chieftains of the O'Boylans and McMahons in County Monaghan and the Kingdom of Airgíalla, until the 17th century. In Scotland, the medieval and post-medieval use of crannogs is also documented into the early 18th century. Whether this increase in status is real, or just a by-product of increasingly complex material assemblages, remains to be convincingly validated.
The earliest-known constructed crannog is the completely artificial Neolithic islet of Eilean Dòmhnuill, Loch Olabhat on North Uist in Scotland. Eilean Domhnuill has produced radiocarbon dates ranging from 3650 to 2500 BC. Irish crannogs appear in middle Bronze Age layers at Ballinderry (1200–600 BC). Recent radiocarbon dating of worked timber found in Loch Bhorghastail on the Isle of Lewis has produced evidence of crannogs as old as 3380-3630 BC. Prior to the Bronze Age, the existence of artificial island settlement in Ireland is not as clear. While lakeside settlements are evident in Ireland from 4500 BC, these settlements are not crannogs, as they were not intended to be islands. Despite having a lengthy chronology, their use was not at all consistent or unchanging.
Crannog construction and occupation was at its peak in Scotland from about 800 BC to AD 200. Not surprisingly, crannogs have useful defensive properties, although there appears to be more significance to prehistoric use than simple defense, as very few weapons or evidence for destruction appear in excavations of prehistoric crannogs. In Ireland, crannogs were at their zenith during the Early Historic period, when they were the homes and retreats of kings, lords, prosperous farmers and, occasionally, socially marginalised groups, such as monastic hermits or metalsmiths who could work in isolation. Despite scholarly concepts supporting a strict Early Historic evolution, Irish excavations are increasingly uncovering examples that date from the "missing" Iron Age in Ireland.
The construction techniques for a crannog (prehistoric or otherwise) are as varied as the multitude of finished forms that make up the archaeological record. Island settlement in Scotland and Ireland spans the entire range of possibilities, from entirely natural, small islets to completely artificial islets; definitions therefore remain contentious. For crannogs in the strict sense, construction typically began on a shallow reef or rise in the lochbed.
When timber was available, many crannogs were surrounded by a circle of wooden piles, with axe-sharpened bases that were driven into the bottom, forming a circular enclosure that helped to retain the main mound and prevent erosion. The piles could also be joined together by mortise and tenon, or large holes cut to carefully accept specially shaped timbers designed to interlock and provide structural rigidity. On other examples, interior surfaces were built up with any mixture of clay, peat, stone, timber or brush – whatever was available. In some instances, more than one structure was built on crannogs.
In other types of crannogs, builders and occupants added large stones to the waterline of small natural islets, extending and enlarging them over successive phases of renewal. Larger crannogs could be occupied by extended families or communal groups, and access was either by logboats or coracles. Evidence for timber or stone causeways exists on a large number of crannogs. The causeways may have been slightly submerged; this has been interpreted as a device to make access difficult but may also be a result of loch level fluctuations over the ensuing centuries or millennia. Organic remains are often found in excellent condition on these water-logged sites. The bones of cattle, deer, and swine have been found in excavated crannogs, while remains of wooden utensils and even dairy products have been completely preserved for several millennia.
In June 2021, the Loch Tay Crannog was seriously damaged in a fire, but funding was given to repair the structure and to conserve the museum materials that were retained. The UNESCO Chair in Refugee Integration through Languages and the Arts, Professor Alison Phipps, OBE, of Glasgow University, and African artist Tawona Sithole considered its future and its impact as a symbol of common human history and 'potent ways of healing', including restarting the creative weaving with Soay sheep wool in 'a thousand touches'. | [
{
"paragraph_id": 0,
"text": "A crannog (/ˈkrænəɡ/; Irish: crannóg [ˈkɾˠan̪ˠoːɡ]; Scottish Gaelic: crannag [ˈkʰɾan̪ˠak]) is typically a partially or entirely artificial island, usually built in lakes and estuarine waters of Scotland, Wales, and Ireland. Unlike the prehistoric pile dwellings around the Alps, which were built on the shores and not inundated until later, crannogs were built in the water, thus forming artificial islands.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Crannogs were used as dwellings over five millennia, from the European Neolithic Period to as late as the 17th/early 18th century. In Scotland there is no convincing evidence in the archaeological record of Early and Middle Bronze Age or Norse Period use. The radiocarbon dating obtained from key sites such as Oakbank and Redcastle indicates at a 95.4 per cent confidence level that they date to the Late Bronze Age to Early Iron Age. The date ranges fall after around 800 BC and so could be considered Late Bronze Age by only the narrowest of margins.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Crannogs have been variously interpreted as free-standing wooden structures, as at Loch Tay, although more commonly they are composed of brush, stone or timber mounds that can be revetted with timber piles. However, in areas such as the Outer Hebrides of Scotland, timber was unavailable from the Neolithic era onwards. As a result, crannogs made completely of stone and supporting drystone architecture are common there. Today, crannogs typically appear as small, circular islets, often 10 to 30 metres (30 to 100 ft) in diameter, covered in dense vegetation due to their inaccessibility to grazing livestock.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Irish word crannóg derives from Old Irish crannóc, which referred to a wooden structure or vessel, stemming from crann, which means \"tree\", suffixed with \"-óg\" which is a diminutive ending ultimately borrowed from Welsh. The suffix -óg is sometimes misunderstood by non-native Irish-speakers as óg, which is a separate word that means \"young\". This misunderstanding leads to a folk etymology whereby crannóg is misanalysed as crann óg, which is pronounced differently and means \"a young tree\". The modern sense of the term first appears sometime around the 12th century; its popularity spread in the medieval period along with the terms isle, ylle, inis, eilean or oileán.",
"title": "Etymology and uncertain meanings"
},
{
"paragraph_id": 4,
"text": "There is some confusion over whether the term crannog originally referred to the structure atop the island or to the island itself. The additional meanings of Irish crannóg variously relate to 'structure/piece of wood', including 'crow's nest', 'pulpit', or 'driver's box on a coach'; to 'vessel/box/chest' more generally; and to 'wooden pin'. The Scottish Gaelic form is crannag and has the additional meanings of 'pulpit' and 'churn'. Thus, there is no real consensus on what the term crannog actually implies, although the modern adoption in the English language broadly refers to a partially or completely artificial islet that saw use from the prehistoric to the Post-Medieval period in Ireland and Scotland.",
"title": "Etymology and uncertain meanings"
},
{
"paragraph_id": 5,
"text": "Crannogs are widespread in Ireland, with an estimated 1,200 examples, while Scotland has 389 sites officially listed as such. The actual number in Scotland varies considerably depending on definition—between about 350 and 500, due to the use of the term \"island dun\" for well over one hundred Hebridean examples—a distinction that has created a divide between mainland Scottish crannog and Hebridean islet settlement studies. Previously unknown crannogs in Scotland and Ireland are still being found as underwater surveys continue to investigate loch beds for completely submerged examples.",
"title": "Location"
},
{
"paragraph_id": 6,
"text": "The largest concentrations of crannogs in Ireland are found in the Drumlin Belt of the Midlands, North and Northwest. In Scotland, crannogs are mostly found on the western coast, with high concentrations in Argyll and Dumfries and Galloway. In reality, the Western Isles contain the highest density of lake-settlements in Scotland, yet they are recognised under varying terms besides \"crannog\". One lone Welsh example exists at Llangorse Lake, probably a product of Irish influence.",
"title": "Location"
},
{
"paragraph_id": 7,
"text": "Reconstructed Irish crannógs are located at Craggaunowen, County Clare, in the Irish National Heritage Park, County Wexford and at Castle Espie, County Down. In Scotland there are reconstructions at the \"Scottish Crannog Centre\" at Loch Tay, Perthshire; this centre offers guided tours and hands-on activities, including wool-spinning, wood-turning and making fire, holds events to celebrate wild cooking and crafts, and hosts yearly Midsummer, Lughnasadh and Samhain festivals.",
"title": "Location"
},
{
"paragraph_id": 8,
"text": "Crannogs took on many different forms and methods of construction based on what was available in the immediate landscape. The classic image of a prehistoric crannog stems from both post-medieval illustrations and highly influential excavations, such as Milton Loch in Scotland by C. M. Piggot after World War II. The Milton Loch interpretation is of a small islet surrounded or defined at its edges by timber piles and a gangway, topped by a typical Iron Age roundhouse.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 9,
"text": "The choice of a small islet as a home may seem odd today, yet waterways were the main channels for both communication and travel until the 19th century in much of Ireland and, especially, Highland Scotland. Crannogs are traditionally interpreted as simple prehistorical farmsteads. They are also interpreted as boltholes in times of danger, as status symbols with limited access, and as inherited locations of power that imply a sense of legitimacy and ancestry towards ownership of the surrounding landscape.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 10,
"text": "A strict definition of a crannog, which has long been debated, requires the use of timber. Sites in the Western Isles do not satisfy this criterion, although their inhabitants shared the common habit of living on water. If not classed as \"true\" crannogs, small occupied islets (often at least partially artificial in nature) may be referred to as \"island duns\". But, rather confusingly, 22 islet-based sites are classified as \"proper\" crannogs due to the different interpretations of the inspectors or excavators who drew up field reports.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 11,
"text": "Hebridean island dwellings or crannogs were commonly built on both natural and artificial islets, usually reached by a stone causeway. The visible structural remains are traditionally interpreted as duns or, in more recent terminology, as \"Atlantic roundhouses\". This terminology has recently become popular when describing the entire range of robust, drystone structures that existed in later prehistoric Atlantic Scotland.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 12,
"text": "The majority of crannog excavations were poorly conducted (by modern standards) in the late 19th and early 20th centuries by early antiquarians, or were purely accidental finds as lochs were drained during the improvements to increase usable farmland or pasture. In some early digs, labourers hauled away tons of materials, with little regard to anything that was not of immediate economic value. Conversely, the vast majority of early attempts at proper excavation failed to accurately measure or record stratigraphy, thereby failing to provide a secure context for artefact finds. Thus only extremely limited interpretations are possible. Preservation and conservation techniques for waterlogged materials such as logboats or structural material were all but non-existent, and a number of extremely important finds were destroyed as a result: in some instances dried out for firewood.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 13,
"text": "From about 1900 to the late 1940s there was very little crannog excavation in Scotland, while some important and highly influential contributions were made in Ireland. In contrast, relatively few crannogs have been excavated since the Second World War. But this number has steadily grown, especially since the early 1980s, and may soon surpass pre-war totals. The overwhelming majority of crannogs show multiple phases of occupation and re-use, often extending over centuries. Thus the re-occupiers may have viewed crannogs as a legacy that was alive in local tradition and memory. Crannog reoccupation is important and significant, especially in the many instances of crannogs built near natural islets, which were often completely unused. This long chronology of use has been verified by both radiocarbon dating and more precisely by dendrochronology.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 14,
"text": "Interpretations of crannog function have not been static; instead they appear to have changed in both the archaeological and historic records. Rather than the simple domestic residences of prehistory, the medieval crannogs were increasingly seen as strongholds of the upper class or regional political players, such as the Gaelic chieftains of the O'Boylans and McMahons in County Monaghan and the Kingdom of Airgíalla, until the 17th century. In Scotland, the medieval and post-medieval use of crannogs is also documented into the early 18th century. Whether this increase in status is real, or just a by-product of increasingly complex material assemblages, remains to be convincingly validated.",
"title": "Types and problems with definition"
},
{
"paragraph_id": 15,
"text": "The earliest-known constructed crannog is the completely artificial Neolithic islet of Eilean Dòmhnuill, Loch Olabhat on North Uist in Scotland. Eilean Domhnuill has produced radiocarbon dates ranging from 3650 to 2500 BC. Irish crannogs appear in middle Bronze Age layers at Ballinderry (1200–600 BC). Recent radiocarbon dating of worked timber found in Loch Bhorghastail on the Isle of Lewis has produced evidence of crannogs as old as 3380-3630 BC. Prior to the Bronze Age, the existence of artificial island settlement in Ireland is not as clear. While lakeside settlements are evident in Ireland from 4500 BC, these settlements are not crannogs, as they were not intended to be islands. Despite having a lengthy chronology, their use was not at all consistent or unchanging.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Crannog construction and occupation was at its peak in Scotland from about 800 BC to AD 200. Not surprisingly, crannogs have useful defensive properties, although there appears to be more significance to prehistoric use than simple defense, as very few weapons or evidence for destruction appear in excavations of prehistoric crannogs. In Ireland, crannogs were at their zenith during the Early Historic period, when they were the homes and retreats of kings, lords, prosperous farmers and, occasionally, socially marginalised groups, such as monastic hermits or metalsmiths who could work in isolation. Despite scholarly concepts supporting a strict Early Historic evolution, Irish excavations are increasingly uncovering examples that date from the \"missing\" Iron Age in Ireland.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The construction techniques for a crannog (prehistoric or otherwise) are as varied as the multitude of finished forms that make up the archaeological record. Island settlement in Scotland and Ireland is manifest through the entire range of possibilities ranging from entirely natural, small islets to completely artificial islets, therefore definitions remain contentious. For crannogs in the strict sense, typically the construction effort began on a shallow reef or rise in the lochbed.",
"title": "Construction"
},
{
"paragraph_id": 18,
"text": "When timber was available, many crannogs were surrounded by a circle of wooden piles, with axe-sharpened bases that were driven into the bottom, forming a circular enclosure that helped to retain the main mound and prevent erosion. The piles could also be joined together by mortise and tenon, or large holes cut to carefully accept specially shaped timbers designed to interlock and provide structural rigidity. On other examples, interior surfaces were built up with any mixture of clay, peat, stone, timber or brush – whatever was available. In some instances, more than one structure was built on crannogs.",
"title": "Construction"
},
{
"paragraph_id": 19,
"text": "In other types of crannogs, builders and occupants added large stones to the waterline of small natural islets, extending and enlarging them over successive phases of renewal. Larger crannogs could be occupied by extended families or communal groups, and access was either by logboats or coracles. Evidence for timber or stone causeways exists on a large number of crannogs. The causeways may have been slightly submerged; this has been interpreted as a device to make access difficult but may also be a result of loch level fluctuations over the ensuing centuries or millennia. Organic remains are often found in excellent condition on these water-logged sites. The bones of cattle, deer, and swine have been found in excavated crannogs, while remains of wooden utensils and even dairy products have been completely preserved for several millennia.",
"title": "Construction"
},
{
"paragraph_id": 20,
"text": "In June 2021, the Loch Tay Crannog was seriously damaged in a fire but funding was given to repair the structure, and conserve the museum materials retained. The UNESCO Chair in Refugee Integration through Languages and the Arts, Professor Alison Phipps, OBE of Glasgow University and African artist Tawona Sithole considered its future and its impact as a symbol of common human history and 'potent ways of healing' including restarting the creative weaving with Soay sheep wool in 'a thousand touches'.",
"title": "Construction"
}
] | A crannog is typically a partially or entirely artificial island, usually built in lakes and estuarine waters of Scotland, Wales, and Ireland. Unlike the prehistoric pile dwellings around the Alps, which were built on the shores and not inundated until later, crannogs were built in the water, thus forming artificial islands. Crannogs were used as dwellings over five millennia, from the European Neolithic Period to as late as the 17th/early 18th century. In Scotland there is no convincing evidence in the archaeological record of Early and Middle Bronze Age or Norse Period use. The radiocarbon dating obtained from key sites such as Oakbank and Redcastle indicates at a 95.4 per cent confidence level that they date to the Late Bronze Age to Early Iron Age. The date ranges fall after around 800 BC and so could be considered Late Bronze Age by only the narrowest of margins. Crannogs have been variously interpreted as free-standing wooden structures, as at Loch Tay, although more commonly they are composed of brush, stone or timber mounds that can be revetted with timber piles. However, in areas such as the Outer Hebrides of Scotland, timber was unavailable from the Neolithic era onwards. As a result, crannogs made completely of stone and supporting drystone architecture are common there. Today, crannogs typically appear as small, circular islets, often 10 to 30 metres in diameter, covered in dense vegetation due to their inaccessibility to grazing livestock. | 2001-11-16T11:10:21Z | 2023-11-29T15:27:52Z | [
"Template:Cite book",
"Template:Webarchive",
"Template:Lang-ga",
"Template:Convert",
"Template:Cite journal",
"Template:Fortifications",
"Template:IPA-gd",
"Template:Reflist",
"Template:Cite web",
"Template:Commons category",
"Template:Prehistoric Scotland",
"Template:Scottish architecture",
"Template:IPA-ga",
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:Lang-gd",
"Template:Lang",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Crannog |
7,123 | Calendar date | A calendar date is a reference to a particular day represented within a calendar system. The calendar date allows the specific day to be identified. The number of days between two dates may be calculated. For example, "25 December 2023" is ten days after "15 December 2023". The date of a particular event depends on the observed time zone. For example, the air attack on Pearl Harbor that began at 7:48 a.m. Hawaiian time on 7 December 1941 took place at 3:18 a.m. Japan Standard Time, 8 December in Japan.
A particular day may be assigned a different nominal date according to the calendar used, so an identifying suffix may be needed where ambiguity may arise. The Gregorian calendar is the world's most widely used civil calendar, and is designated (in English) as AD or CE. Many cultures use religious or regnal calendars such as the Gregorian (Western Christendom, AD), Hebrew calendar (Judaism, AM), the Hijri calendars (Islam, AH), Julian calendar (Eastern Christendom, AD) or any other of the many calendars used around the world. In most calendar systems, the date consists of three parts: the (numbered) day of the month, the month, and the (numbered) year. There may also be additional parts, such as the day of the week. Years are usually counted from a particular starting point, usually called the epoch, with era referring to the span of time since that epoch.
A date without the year may also be referred to as a date or calendar date (such as "17 December" rather than "17 December 2023"). As such, it is either shorthand for the current year or it defines the day of an annual event, such as a birthday on 31 May, a holiday on 1 September, or Christmas on 25 December.
Many computer systems internally store points in time in Unix time format or some other system time format. The date (Unix) command—internally using the C date and time functions—can be used to convert that internal representation of a point in time to most of the date representations shown here.
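As a rough illustration of that kind of conversion, the following sketch uses Python's standard library in place of the C functions or the date command; the timestamp value is an arbitrary example chosen here for illustration, not one taken from the text.

```python
import datetime

# A point in time stored internally as Unix time (seconds since 1970-01-01 00:00:00 UTC).
unix_seconds = 1136073600  # corresponds to 2006-01-01 00:00:00 UTC

# Convert the internal representation into several human-readable date forms.
moment = datetime.datetime.fromtimestamp(unix_seconds, tz=datetime.timezone.utc)
print(moment.strftime("%Y-%m-%d"))          # ISO 8601 calendar date: 2006-01-01
print(moment.strftime("%d %B %Y"))          # day-month-year with month name: 01 January 2006
print(moment.strftime("%Y-%jT%H:%M:%SZ"))   # ordinal (day-of-year) form: 2006-001T00:00:00Z
```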
There is a large variety of formats for dates in use, which differ in the order of date components (e.g. 31/05/2006, 05/31/2006, 2006/05/31), component separators (e.g. 31.05.2006, 31/05/2006, 31-05-2006), whether leading zeros are included (e.g. 31/5/2006 vs. 31/05/2006), whether all four digits of the year are written (e.g. 31.05.2006 vs. 31.05.06), and whether the month is represented in Arabic or Roman numerals or by name (e.g. 31.05.2006, 31.V.2006 vs. 31 May 2006). These variations all use the sample date of 31 May 2006.
This little-endian sequence is used by a majority of the world and is the form preferred by the United Nations when writing the full date in official documents. This date format originates from the custom of writing the date as "the Nth day of [month] in the year of our Lord [year]" in Western religious and legal documents. The format has shortened over time, but the order of the elements has remained constant. The following examples use the date of 9 November 2006. (With the years 2000–2009, care must be taken to ensure that two-digit years are not misread as 1900–1909 or other similar years.) In formats that use them, the dots function as ordinal dots.
In this format, the most significant data item is written before lesser data items, i.e. the year before the month and the month before the day. It is consistent with the big-endianness of the Hindu–Arabic numeral system, which progresses from the highest to the lowest order of magnitude. That is, when this format is used, textual orderings and chronological orderings are identical. This form is standard in East Asia, Iran, Lithuania, Hungary, and Sweden, and is used in some other countries to a limited extent.
Examples for the 9th of November 2003:
It is also extended through the universal big-endian format clock time: 9 November 2003, 18h 14m 12s, or 2003/11/9/18:14:12 or (ISO 8601) 2003-11-09T18:14:12.
This sequence is used primarily in the Philippines and the United States. It is also used to varying extents in Canada (though never in Quebec). This date format was commonly used alongside the little-endian form in the United Kingdom until the mid-20th century and can be found in both defunct and modern print media such as the London Gazette and The Times, respectively. This format was also commonly used by several English-language print media in many former British colonies, and was one of two formats commonly used in India during the British Raj era until the mid-20th century. In the United States, it is spoken as Sunday, November 9, for example, although usage of "the" is not uncommon (e.g. Sunday, November the 9th, and even November the 9th, Sunday, are also possible and readily understood).
The modern convention is to avoid using the ordinal (th, st, rd, nd) form of numbers when the day follows the month (July 4 or July 4, 2006). The ordinal was common in the past and is still sometimes used ([the] 4th [of] July or July 4th).
This date format is used in Kazakhstan, Latvia, Nepal, and Turkmenistan. According to the official rules of documenting dates by governmental authorities, the long date format in Kazakh is written in the year–day–month order, e.g. 2006 5 April (Kazakh: 2006 жылғы 05 сәуір).
There are several standards that specify date formats:
Many numerical forms can create confusion when used in international correspondence, particularly when abbreviating the year to its final two digits, with no context. For example, "07/08/06" could refer to either 7 August 2006 or July 8, 2006 (or 1906, or the sixth year of any century), or 2007 August 6, and even in some extremely rare cases it could mean 2007 8 June.
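A short sketch of that ambiguity, using Python's strptime purely for illustration: the same eight-character string parses to three different dates depending on which component order is assumed.

```python
from datetime import datetime

ambiguous = "07/08/06"
candidate_orders = {
    "day/month/year": "%d/%m/%y",
    "month/day/year": "%m/%d/%y",
    "year/month/day": "%y/%m/%d",
}
for label, fmt in candidate_orders.items():
    print(f"{label}: {datetime.strptime(ambiguous, fmt).date()}")
# day/month/year: 2006-08-07
# month/day/year: 2006-07-08
# year/month/day: 2007-08-06
```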
The date format of YYYY-MM-DD in ISO 8601, as well as in other international standards, has been adopted for many applications for reasons including reducing transnational ambiguity and simplifying machine processing.
An early U.S. Federal Information Processing Standard recommended 2-digit years. This is now widely recognized as extremely problematic, because of the year 2000 problem. Some U.S. government agencies now use ISO 8601 with 4-digit years.
When transitioning from one calendar or date notation to another, a format that includes both styles may be developed; for example Old Style and New Style dates in the transition from the Julian to the Gregorian calendar.
One of the advantages of using the ISO 8601 date format is that the lexicographical order (ASCIIbetical) of the representations is equivalent to the chronological order of the dates, assuming that all dates are in the same time zone. Thus dates can be sorted using simple string comparison algorithms, and indeed by any left to right collation. For example:
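A minimal sketch of that property (the date strings below are arbitrary examples, not taken from the text): sorting the strings lexicographically also sorts them chronologically.

```python
# ISO 8601 dates sort chronologically under plain string comparison,
# provided all values are in the same time zone and use fixed-width fields.
dates = ["2023-11-09", "2003-02-01", "2023-01-30", "2003-11-09"]
print(sorted(dates))
# ['2003-02-01', '2003-11-09', '2023-01-30', '2023-11-09']

# The same holds when a 24-hour time is appended after the date.
stamps = ["2003-11-09T18:14:12", "2003-11-09T06:02:00"]
print(sorted(stamps))
# ['2003-11-09T06:02:00', '2003-11-09T18:14:12']
```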
The YYYY-MM-DD layout is the only common format that can provide this. Sorting other date representations involves some parsing of the date strings. This also works when a time in 24-hour format is included after the date, as long as all times are understood to be in the same time zone.
ISO 8601 is used widely where concise, human-readable yet easily computable and unambiguous dates are required, although many applications store dates internally as UNIX time and only convert to ISO 8601 for display. All modern computer operating systems retain the date information of files separately from the file names, allowing the user to choose which format they prefer and to have files sorted by date, irrespective of their names.
The U.S. military sometimes uses a system that it calls the "Julian date format", which indicates the year and the day of the year out of the 365 or 366 days in that year (so a designation of the month is not needed). For example, "11 December 1999" can be written in some contexts as "1999345" or "99345", for the 345th day of 1999. This system is most often used in US military logistics, since it simplifies the process of calculating estimated shipping and arrival dates. For example: say a tank engine takes an estimated 35 days to ship by sea from the US to South Korea. If the engine is sent on 06104 (Friday, 14 April 2006), it should arrive on 06139 (Friday, 19 May). Outside of the US military and some US government agencies, including the Internal Revenue Service, this format is usually referred to as an "ordinal date", rather than a "Julian date".
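A sketch of the conversions behind those examples; the five-character YYDDD packing and the helper names here are assumptions made for illustration, not an official format specification.

```python
from datetime import date, datetime, timedelta

def to_yyddd(d: date) -> str:
    """Pack a date as a two-digit year plus the three-digit day of the year."""
    return d.strftime("%y%j")

def from_yyddd(s: str) -> date:
    """Unpack a YYDDD string back into a calendar date."""
    return datetime.strptime(s, "%y%j").date()

print(to_yyddd(date(1999, 12, 11)))   # '99345', the 345th day of 1999
print(from_yyddd("06104"))            # 2006-04-14
print(to_yyddd(from_yyddd("06104") + timedelta(days=35)))   # '06139', the estimated arrival
```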
Such ordinal date formats are also used by many computer programs (especially those for mainframe systems). Using a three-digit Julian day number saves one byte of computer storage over a two-digit month plus a two-digit day; for example, "January 17" is 017 in Julian format versus 0117 in month-day format. OS/390 and its successor, z/OS, display dates in yy.ddd format for most operations.
UNIX time stores time as a number in seconds since the beginning of the UNIX Epoch (1970-01-01).
Another "ordinal" date system ("ordinal" in the sense of advancing in value by one as the date advances by one day) is in common use in astronomical calculations and referencing and uses the same name as this "logistics" system. The continuity of representation of period regardless of the time of year being considered is highly useful to both groups of specialists. The astronomers describe their system as also being a "Julian date" system.
Companies in Europe often use year, week number, and day for planning purposes. So, for example, an event in a project can happen on w43 (week 43) or w43-1 (Monday, week 43) or, if the year needs to be indicated, on w0643 (the year 2006, week 43; i.e., Monday 23 October–Sunday 29 October 2006).
An ISO week-numbering year has 52 or 53 full weeks, that is, 364 or 371 days instead of the conventional Gregorian year of 365 or 366 days. These 53-week years occur in all years that have Thursday as the 1st of January and in leap years that start on Wednesday the 1st. The extra week is sometimes referred to as a 'leap week', although ISO 8601 does not use this term.
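A brief sketch of the week-numbering scheme described above, using Python's ISO-calendar support; the w-prefixed labels simply mirror the notation used in this section.

```python
from datetime import date

d = date(2006, 10, 23)                 # Monday of the example week
year, week, weekday = d.isocalendar()  # ISO year, ISO week number, ISO weekday (Monday = 1)
print(f"w{week:02d}-{weekday}")        # w43-1
print(f"w{year % 100:02d}{week:02d}")  # w0643, the year-qualified planning label

# 2020 began on a Wednesday and was a leap year, so it had 53 ISO weeks.
ic = date(2020, 12, 31).isocalendar()
print(ic[0], ic[1])                    # 2020 53
```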
In English-language usage outside North America (mostly in Anglophone Europe and some countries in Australasia), full dates are written as 7 December 1941 (or 7th December 1941) and spoken as "the seventh of December, nineteen forty-one" (the use of "the" and "of" being exceedingly common), with the occasional usage of December 7, 1941 ("December the seventh, nineteen forty-one"). In common with most continental European usage, however, all-numeric dates are invariably ordered dd/mm/yyyy.
In Canada and the United States, the usual written form is December 7, 1941, spoken as "December seventh, nineteen forty-one" or colloquially "December the seventh, nineteen forty-one". Ordinal numerals, however, are not always used when writing and pronouncing dates, and "December seven, nineteen forty-one" is also an accepted pronunciation of the date written December 7, 1941. A notable exception to this rule is the Fourth of July (U.S. Independence Day). | [
{
"paragraph_id": 0,
"text": "A calendar date is a reference to a particular day represented within a calendar system. The calendar date allows the specific day to be identified. The number of days between two dates may be calculated. For example, \"25 December 2023\" is ten days after \"15 December 2023\". The date of a particular event depends on the observed time zone. For example, the air attack on Pearl Harbor that began at 7:48 a.m. Hawaiian time on 7 December 1941 took place at 3:18 a.m. Japan Standard Time, 8 December in Japan.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A particular day may be assigned a different nominal date according to the calendar used, so an identifying suffix may be needed where ambiguity may arise. The Gregorian calendar is the world's most widely used civil calendar, and is designated (in English) as AD or CE. Many cultures use religious or regnal calendars such as the Gregorian (Western Christendom, AD), Hebrew calendar (Judaism, AM), the Hijri calendars (Islam, AH), Julian calendar (Eastern Christendom, AD) or any other of the many calendars used around the world. In most calendar systems, the date consists of three parts: the (numbered) day of the month, the month, and the (numbered) year. There may also be additional parts, such as the day of the week. Years are usually counted from a particular starting point, usually called the epoch, with era referring to the span of time since that epoch.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A date without the year may also be referred to as a date or calendar date (such as \"17 December\" rather than \"17 December 2023\"). As such, it is either shorthand for the current year or it defines the day of an annual event, such as a birthday on 31 May, a holiday on 1 September, or Christmas on 25 December.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Many computer systems internally store points in time in Unix time format or some other system time format. The date (Unix) command—internally using the C date and time functions—can be used to convert that internal representation of a point in time to most of the date representations shown here.",
"title": ""
},
{
"paragraph_id": 4,
"text": "There is a large variety of formats for dates in use, which differ in the order of date components. These variations use the sample date of 31 May 2006: (e.g. 31/05/2006, 05/31/2006, 2006/05/31), component separators (e.g. 31.05.2006, 31/05/2006, 31-05-2006), whether leading zeros are included (e.g. 31/5/2006 vs. 31/05/2006), whether all four digits of the year are written (e.g., 31.05.2006 vs. 31.05.06), and whether the month is represented in Arabic or Roman numerals or by name (e.g. 31.05.2006, 31.V.2006 vs. 31 May 2006).",
"title": "Date format"
},
{
"paragraph_id": 5,
"text": "This little-endian sequence is used by a majority of the world and is the preferred form by the United Nations when writing the full date format in official documents. This date format originates from the custom of writing the date as \"the Nth day of [month] in the year of our Lord [year]\" in Western religious and legal documents. The format has shortened over time but the order of the elements has remained constant. The following examples use the date of 9 November 2006. (With the years 2000–2009, care must be taken to ensure that two digit years do not intend to be 1900–1909 or other similar years.) The dots have a function of ordinal dot.",
"title": "Date format"
},
{
"paragraph_id": 6,
"text": "In this format, the most significant data item is written before lesser data items i.e. the year before the month before the day. It is consistent with the big-endianness of the Hindu–Arabic numeral system, which progresses from the highest to the lowest order magnitude. That is, using this format textual orderings and chronological orderings are identical. This form is standard in East Asia, Iran, Lithuania, Hungary, and Sweden; and some other countries to a limited extent.",
"title": "Date format"
},
{
"paragraph_id": 7,
"text": "Examples for the 9th of November 2003:",
"title": "Date format"
},
{
"paragraph_id": 8,
"text": "It is also extended through the universal big-endian format clock time: 9 November 2003, 18h 14m 12s, or 2003/11/9/18:14:12 or (ISO 8601) 2003-11-09T18:14:12.",
"title": "Date format"
},
{
"paragraph_id": 9,
"text": "",
"title": "Date format"
},
{
"paragraph_id": 10,
"text": "This sequence is used primarily in the Philippines and the United States. It is also used to varying extents in Canada (though never in Quebec). This date format was commonly used alongside the little-endian form in the United Kingdom until the mid-20th century and can be found in both defunct and modern print media such as the London Gazette and The Times, respectively. This format was also commonly used by several English-language print media in many former British colonies and also one of two formats commonly used in India during British Raj era until the mid-20th century. In the United States, it is said as of Sunday, November 9, for example, although usage of \"the\" is not uncommon (e.g. Sunday, November the 9th, and even November the 9th, Sunday, are also possible and readily understood).",
"title": "Date format"
},
{
"paragraph_id": 11,
"text": "The modern convention is to avoid using the ordinal (th, st, rd, nd) form of numbers when the day follows the month (July 4 or July 4, 2006). The ordinal was common in the past and is still sometimes used ([the] 4th [of] July or July 4th).",
"title": "Date format"
},
{
"paragraph_id": 12,
"text": "This date format is used in Kazakhstan, Latvia, Nepal, and Turkmenistan. According to the official rules of documenting dates by governmental authorities, the long date format in Kazakh is written in the year–day–month order, e.g. 2006 5 April (Kazakh: 2006 жылғы 05 сәуір).",
"title": "Date format"
},
{
"paragraph_id": 13,
"text": "There are several standards that specify date formats:",
"title": "Date format"
},
{
"paragraph_id": 14,
"text": "Many numerical forms can create confusion when used in international correspondence, particularly when abbreviating the year to its final two digits, with no context. For example, \"07/08/06\" could refer to either 7 August 2006 or July 8, 2006 (or 1906, or the sixth year of any century), or 2007 August 6, and even in some extremely rare cases it could mean 2007 8 June.",
"title": "Date format"
},
{
"paragraph_id": 15,
"text": "The date format of YYYY-MM-DD in ISO 8601, as well as other international standards, have been adopted for many applications for reasons including reducing transnational ambiguity and simplifying machine processing.",
"title": "Date format"
},
{
"paragraph_id": 16,
"text": "An early U.S. Federal Information Processing Standard recommended 2-digit years. This is now widely recognized as extremely problematic, because of the year 2000 problem. Some U.S. government agencies now use ISO 8601 with 4-digit years.",
"title": "Date format"
},
{
"paragraph_id": 17,
"text": "When transitioning from one calendar or date notation to another, a format that includes both styles may be developed; for example Old Style and New Style dates in the transition from the Julian to the Gregorian calendar.",
"title": "Date format"
},
{
"paragraph_id": 18,
"text": "One of the advantages of using the ISO 8601 date format is that the lexicographical order (ASCIIbetical) of the representations is equivalent to the chronological order of the dates, assuming that all dates are in the same time zone. Thus dates can be sorted using simple string comparison algorithms, and indeed by any left to right collation. For example:",
"title": "Advantages for ordering in sequence"
},
{
"paragraph_id": 19,
"text": "The YYYY-MM-DD layout is the only common format that can provide this. Sorting other date representations involves some parsing of the date strings. This also works when a time in 24-hour format is included after the date, as long as all times are understood to be in the same time zone.",
"title": "Advantages for ordering in sequence"
},
{
"paragraph_id": 20,
"text": "ISO 8601 is used widely where concise, human-readable yet easily computable and unambiguous dates are required, although many applications store dates internally as UNIX time and only convert to ISO 8601 for display. All modern computer Operating Systems retain date information of files outside of their titles, allowing the user to choose which format they prefer and have them sorted thus, irrespective of the files' names.",
"title": "Advantages for ordering in sequence"
},
{
"paragraph_id": 21,
"text": "The U.S. military sometimes uses a system, which they call \"Julian date format\" that indicates the year and the actual day out of the 365 days of the year (and thus a designation of the month would not be needed). For example, \"11 December 1999\" can be written in some contexts as \"1999345\" or \"99345\", for the 345th day of 1999. This system is most often used in US military logistics since it simplifies the process of calculating estimated shipping and arrival dates. For example: say a tank engine takes an estimated 35 days to ship by sea from the US to South Korea. If the engine is sent on 06104 (Friday, 14 April 2006), it should arrive on 06139 (Friday, 19 May). Outside of the US military and some US government agencies, including the Internal Revenue Service, this format is usually referred to as \"ordinal date\", rather than \"Julian date\".",
"title": "Specialized usage"
},
{
"paragraph_id": 22,
"text": "Such ordinal date formats are also used by many computer programs (especially those for mainframe systems). Using a three-digit Julian day number saves one byte of computer storage over a two-digit month plus two-digit day, for example, \"January 17\" is 017 in Julian versus 0117 in month-day format. OS/390 or its successor, z/OS, display dates in yy.ddd format for most operations.",
"title": "Specialized usage"
},
{
"paragraph_id": 23,
"text": "UNIX time stores time as a number in seconds since the beginning of the UNIX Epoch (1970-01-01).",
"title": "Specialized usage"
},
{
"paragraph_id": 24,
"text": "Another \"ordinal\" date system (\"ordinal\" in the sense of advancing in value by one as the date advances by one day) is in common use in astronomical calculations and referencing and uses the same name as this \"logistics\" system. The continuity of representation of period regardless of the time of year being considered is highly useful to both groups of specialists. The astronomers describe their system as also being a \"Julian date\" system.",
"title": "Specialized usage"
},
{
"paragraph_id": 25,
"text": "Companies in Europe often use year, week number, and day for planning purposes. So, for example, an event in a project can happen on w43 (week 43) or w43-1 (Monday, week 43) or, if the year needs to be indicated, on w0643 (the year 2006, week 43; i.e., Monday 23 October–Sunday 29 October 2006).",
"title": "Specialized usage"
},
{
"paragraph_id": 26,
"text": "An ISO week-numbering year has 52 or 53 full weeks. That is 364 or 371 days instead of the conventional Gregorian year of 365 or 366 days. These 53 week years occur on all years that have Thursday as the 1st of January and on leap years that start on Wednesday the 1st. The extra week is sometimes referred to as a 'leap week', although ISO 8601 does not use this term.",
"title": "Specialized usage"
},
{
"paragraph_id": 27,
"text": "In English-language outside North America (mostly in Anglophone Europe and some countries in Australasia), full dates are written as 7 December 1941 (or 7th December 1941) and spoken as \"the seventh of December, nineteen forty-one\" (exceedingly common usage of \"the\" and \"of\"), with the occasional usage of December 7, 1941 (\"December the seventh, nineteen forty-one\"). In common with most continental European usage, however, all-numeric dates are invariably ordered dd/mm/yyyy.",
"title": "Specialized usage"
},
{
"paragraph_id": 28,
"text": "In Canada and the United States, the usual written form is December 7, 1941, spoken as \"December seventh, nineteen forty-one\" or colloquially \"December the seventh, nineteen forty-one\". Ordinal numerals, however, are not always used when writing and pronouncing dates, and \"December seven, nineteen forty-one\" is also an accepted pronunciation of the date written December 7, 1941. A notable exception to this rule is the Fourth of July (U.S. Independence Day).",
"title": "Specialized usage"
}
] | A calendar date is a reference to a particular day represented within a calendar system. The calendar date allows the specific day to be identified. The number of days between two dates may be calculated. For example, "25 December 2023" is ten days after "15 December 2023". The date of a particular event depends on the observed time zone. For example, the air attack on Pearl Harbor that began at 7:48 a.m. Hawaiian time on 7 December 1941 took place at 3:18 a.m. Japan Standard Time, 8 December in Japan. A particular day may be assigned a different nominal date according to the calendar used, so an identifying suffix may be needed where ambiguity may arise. The Gregorian calendar is the world's most widely used civil calendar, and is designated as AD or CE. Many cultures use religious or regnal calendars such as the Gregorian, Hebrew calendar, the Hijri calendars, Julian calendar or any other of the many calendars used around the world. In most calendar systems, the date consists of three parts: the (numbered) day of the month, the month, and the (numbered) year. There may also be additional parts, such as the day of the week. Years are usually counted from a particular starting point, usually called the epoch, with era referring to the span of time since that epoch. A date without the year may also be referred to as a date or calendar date. As such, it is either shorthand for the current year or it defines the day of an annual event, such as a birthday on 31 May, a holiday on 1 September, or Christmas on 25 December. Many computer systems internally store points in time in Unix time format or some other system time format.
The date (Unix) command—internally using the C date and time functions—can be used to convert that internal representation of a point in time to most of the date representations shown here. | 2001-11-16T11:51:50Z | 2023-12-20T20:34:37Z | [
"Template:Multiple issues",
"Template:Legend",
"Template:Nowrap",
"Template:Sic",
"Template:Dubious",
"Template:Cite news",
"Template:Cite IETF",
"Template:Citation needed span",
"Template:See also",
"Template:Cn",
"Template:Citation needed",
"Template:Code",
"Template:Ndash",
"Template:Notelist",
"Template:Short description",
"Template:Lang-kz",
"Template:Anchor",
"Template:Nbsp",
"Template:Cite web",
"Template:ISBN",
"Template:Efn",
"Template:More",
"Template:Reflist",
"Template:Sfrac",
"Template:Authority control",
"Template:Bsn",
"Template:Cite book",
"Template:IETF RFC",
"Template:Small",
"Template:Use American English"
] | https://en.wikipedia.org/wiki/Calendar_date |
7,124 | Cist | A cist (/ˈkɪst/; also kist /ˈkɪst/; from Greek: κίστη, Middle Welsh Kist or Germanic Kiste) is a small stone-built coffin-like box or ossuary used to hold the bodies of the dead. Examples can be found across Europe and in the Middle East. A cist may have been associated with other monuments, perhaps under a cairn or long barrow. Several cists are sometimes found close together within the same cairn or barrow. Often ornaments have been found within an excavated cist, indicating the wealth or prominence of the interred individual.
This old word is preserved in the Nordic languages as "kista" in Swedish and "kiste" in Danish and Norwegian, where it is the word for a funerary coffin. In English it is related to "cistern". | [
{
"paragraph_id": 0,
"text": "A cist (/ˈkɪst/; also kist /ˈkɪst/; from Greek: κίστη, Middle Welsh Kist or Germanic Kiste) is a small stone-built coffin-like box or ossuary used to hold the bodies of the dead. Examples can be found across Europe and in the Middle East. A cist may have been associated with other monuments, perhaps under a cairn or long barrow. Several cists are sometimes found close together within the same cairn or barrow. Often ornaments have been found within an excavated cist, indicating the wealth or prominence of the interred individual.",
"title": ""
},
{
"paragraph_id": 1,
"text": "This old word is preserved in the Nordic languages as \"kista\" in Swedish and \"kiste\" in Danish and Norwegian, where it is the word for a funerary coffin. In English it is related to \"cistern\".",
"title": ""
}
] | A cist is a small stone-built coffin-like box or ossuary used to hold the bodies of the dead. Examples can be found across Europe and in the Middle East.
A cist may have been associated with other monuments, perhaps under a cairn or long barrow. Several cists are sometimes found close together within the same cairn or barrow. Often ornaments have been found within an excavated cist, indicating the wealth or prominence of the interred individual. This old word is preserved in the Nordic languages as "kista" in Swedish and "kiste" in Danish and Norwegian, where it is the word for a funerary coffin. In English it is related to "cistern". | 2023-06-26T15:09:32Z | [
"Template:Lang-grc-gre",
"Template:Reflist",
"Template:Commonscat",
"Template:Prehistoric technology",
"Template:Cite journal",
"Template:Short description",
"Template:Other uses",
"Template:IPAc-en",
"Template:Lang",
"Template:Cn",
"Template:Cite book",
"Template:Cite web",
"Template:Neolithic Europe"
] | https://en.wikipedia.org/wiki/Cist |
|
7,125 | Center (group theory) | In abstract algebra, the center of a group, G, is the set of elements that commute with every element of G. It is denoted Z(G), from German Zentrum, meaning center. In set-builder notation,
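The set-builder formula referred to here is not reproduced in this text; the standard definition is:

```latex
\mathrm{Z}(G) = \{\, z \in G \mid zg = gz \text{ for all } g \in G \,\}
```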
The center is a normal subgroup, Z(G) ⊲ G. As a subgroup, it is always characteristic, but is not necessarily fully characteristic. The quotient group, G / Z(G), is isomorphic to the inner automorphism group, Inn(G).
A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element.
The elements of the center are sometimes called central.
The center of G is always a subgroup of G. In particular:
Furthermore, the center of G is always an abelian and normal subgroup of G. Since all elements of Z(G) commute, it is closed under conjugation.
Note that a homomorphism f: G → H between groups generally does not restrict to a homomorphism between their centers. Although f(Z(G)) commutes with f(G), unless f is surjective f(Z(G)) need not commute with all of H and therefore need not be a subset of Z(H). Put another way, there is no "center" functor between the categories Grp and Ab: even though we can map objects, we cannot map arrows.
By definition, the center is the set of elements for which the conjugacy class of each element is the element itself; i.e., Cl(g) = {g}.
The center is also the intersection of all the centralizers of each element of G. As centralizers are subgroups, this again shows that the center is a subgroup.
Consider the map f: G → Aut(G), from G to the automorphism group of G, defined by f(g) = ϕ_g, where ϕ_g is the automorphism of G defined by
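The defining formula is omitted in this text; the automorphism meant here is the standard conjugation map:

```latex
\phi_g(h) = g h g^{-1} \qquad \text{for all } h \in G
```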
The function f is a group homomorphism; its kernel is precisely the center of G, and its image is called the inner automorphism group of G, denoted Inn(G). By the first isomorphism theorem we get,
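The isomorphism obtained from the first isomorphism theorem, not reproduced in this text, is the standard one:

```latex
G / \mathrm{Z}(G) \cong \mathrm{Inn}(G)
```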
The cokernel of this map is the group Out(G) of outer automorphisms, and these form the exact sequence
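The exact sequence referred to, in its usual form:

```latex
1 \longrightarrow \mathrm{Z}(G) \longrightarrow G \longrightarrow \mathrm{Aut}(G) \longrightarrow \mathrm{Out}(G) \longrightarrow 1
```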
Quotienting out by the center of a group yields a sequence of groups called the upper central series:
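The quotient sequence meant here, in standard notation (the formula itself is not reproduced in this text):

```latex
G_0 = G, \qquad G_{i+1} = G_i / \mathrm{Z}(G_i)
```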
The kernel of the map G → G_i is the ith center of G (second center, third center, etc.) and is denoted Z^i(G). Concretely, the (i + 1)-st center consists of the elements that commute with all elements of G up to an element of the ith center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter.
The ascending chain of subgroups
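The ascending chain in question, in standard notation:

```latex
1 = \mathrm{Z}^0(G) \;\leq\; \mathrm{Z}(G) \;\leq\; \mathrm{Z}^2(G) \;\leq\; \cdots \;\leq\; \mathrm{Z}^i(G) \;\leq\; \cdots
```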
stabilizes at i (equivalently, Z^i(G) = Z^(i+1)(G)) if and only if G_i is centerless. | [
{
"paragraph_id": 0,
"text": "In abstract algebra, the center of a group, G, is the set of elements that commute with every element of G. It is denoted Z(G), from German Zentrum, meaning center. In set-builder notation,",
"title": ""
},
{
"paragraph_id": 1,
"text": "The center is a normal subgroup, Z(G) ⊲ G. As a subgroup, it is always characteristic, but is not necessarily fully characteristic. The quotient group, G / Z(G), is isomorphic to the inner automorphism group, Inn(G).",
"title": ""
},
{
"paragraph_id": 2,
"text": "A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The elements of the center are sometimes called central.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The center of G is always a subgroup of G. In particular:",
"title": "As a subgroup"
},
{
"paragraph_id": 5,
"text": "Furthermore, the center of G is always an abelian and normal subgroup of G. Since all elements of Z(G) commute, it is closed under conjugation.",
"title": "As a subgroup"
},
{
"paragraph_id": 6,
"text": "Note that a homomorphism f: G → H between groups generally does not restrict to a homomorphism between their centers. Although f (Z (G)) commutes with f ( G ), unless f is surjective f (Z (G)) need not commute with all of H and therefore need not be a subset of Z ( H ). Put another way, there is no \"center\" functor between categories Grp and Ab. Even though we can map objects, we cannot map arrows.",
"title": "As a subgroup"
},
{
"paragraph_id": 7,
"text": "By definition, the center is the set of elements for which the conjugacy class of each element is the element itself; i.e., Cl(g) = {g}.",
"title": "Conjugacy classes and centralizers"
},
{
"paragraph_id": 8,
"text": "The center is also the intersection of all the centralizers of each element of G. As centralizers are subgroups, this again shows that the center is a subgroup.",
"title": "Conjugacy classes and centralizers"
},
{
"paragraph_id": 9,
"text": "Consider the map, f: G → Aut(G), from G to the automorphism group of G defined by f(g) = ϕg, where ϕg is the automorphism of G defined by",
"title": "Conjugation"
},
{
"paragraph_id": 10,
"text": "The function, f is a group homomorphism, and its kernel is precisely the center of G, and its image is called the inner automorphism group of G, denoted Inn(G). By the first isomorphism theorem we get,",
"title": "Conjugation"
},
{
"paragraph_id": 11,
"text": "The cokernel of this map is the group Out(G) of outer automorphisms, and these form the exact sequence",
"title": "Conjugation"
},
{
"paragraph_id": 12,
"text": "Quotienting out by the center of a group yields a sequence of groups called the upper central series:",
"title": "Higher centers"
},
{
"paragraph_id": 13,
"text": "The kernel of the map G → Gi is the ith center of G (second center, third center, etc.) and is denoted Z(G). Concretely, the (i + 1)-st center are the terms that commute with all elements up to an element of the ith center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter.",
"title": "Higher centers"
},
{
"paragraph_id": 14,
"text": "The ascending chain of subgroups",
"title": "Higher centers"
},
{
"paragraph_id": 15,
"text": "stabilizes at i (equivalently, Z(G) = Z(G)) if and only if Gi is centerless.",
"title": "Higher centers"
}
] | In abstract algebra, the center of a group, G, is the set of elements that commute with every element of G. It is denoted Z(G), from German Zentrum, meaning center. The center is a normal subgroup, Z(G) ⊲ G. As a subgroup, it is always characteristic, but is not necessarily fully characteristic. The quotient group, G / Z(G), is isomorphic to the inner automorphism group, Inn(G). A group G is abelian if and only if Z(G) = G. At the other extreme, a group is said to be centerless if Z(G) is trivial; i.e., consists only of the identity element. The elements of the center are sometimes called central. | 2001-11-16T16:58:22Z | 2023-10-29T08:13:54Z | [
"Template:Short description",
"Template:Use American English",
"Template:Redirect",
"Template:Math",
"Template:Cite book",
"Template:Use mdy dates",
"Template:Reflist",
"Template:Springer",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Center_(group_theory) |
7,129 | Commonwealth of England | The Commonwealth was the political structure during the period from 1649 to 1660 when England and Wales, later along with Ireland and Scotland, were governed as a republic after the end of the Second English Civil War and the trial and execution of Charles I. The republic's existence was declared through "An Act declaring England to be a Commonwealth", adopted by the Rump Parliament on 19 May 1649. Power in the early Commonwealth was vested primarily in the Parliament and a Council of State. During the period, fighting continued, particularly in Ireland and Scotland, between the parliamentary forces and those opposed to them, in the Cromwellian conquest of Ireland and the Anglo-Scottish war of 1650–1652.
In 1653, after dissolution of the Rump Parliament, the Army Council adopted the Instrument of Government which made Oliver Cromwell Lord Protector of a united "Commonwealth of England, Scotland and Ireland", inaugurating the period now usually known as the Protectorate. After Cromwell's death, and following a brief period of rule under his son, Richard Cromwell, the Protectorate Parliament was dissolved in 1659 and the Rump Parliament recalled, starting a process that led to the restoration of the monarchy in 1660. The term Commonwealth is sometimes used for the whole of 1649 to 1660 – called by some the Interregnum – although for other historians, the use of the term is limited to the years prior to Cromwell's formal assumption of power in 1653.
In retrospect, the period of republican rule for England was a failure in the short term. During the 11-year period, no stable government was established to rule the English state for longer than a few months at a time. Several administrative structures were tried, and several Parliaments called and seated, but little in the way of meaningful, lasting legislation was passed. The only force keeping it together was the personality of Oliver Cromwell, who exerted control through the military by way of the "Grandees", being the Major-Generals and other senior military leaders of the New Model Army. Not only did Cromwell's regime crumble into near anarchy upon his death and the brief administration of his son, but the monarchy he overthrew was restored in 1660, and its first act was officially to erase all traces of any constitutional reforms of the Republican period. Still, the memory of the Parliamentarian cause, dubbed the Good Old Cause by the soldiers of the New Model Army, lingered on. It would carry through English politics and eventually result in a constitutional monarchy.
The Commonwealth period is better remembered for the military success of Thomas Fairfax, Oliver Cromwell, and the New Model Army. Besides resounding victories in the English Civil War, the reformed Navy under the command of Robert Blake defeated the Dutch in the First Anglo-Dutch War which marked the first step towards England's naval supremacy. In Ireland, the Commonwealth period is remembered for Cromwell's brutal subjugation of the Irish, which continued the policies of the Tudor and Stuart periods.
The Rump was created by Pride's Purge of those members of the Long Parliament who did not support the political position of the Grandees in the New Model Army. Just before and after the execution of King Charles I on 30 January 1649, the Rump passed a number of acts of Parliament creating the legal basis for the republic. With the abolition of the monarchy, Privy Council and the House of Lords, it had unchecked executive and legislative power. The English Council of State, which replaced the Privy Council, took over many of the executive functions of the monarchy. It was selected by the Rump, and most of its members were MPs. However, the Rump depended on the support of the Army with which it had a very uneasy relationship. After the execution of Charles I, the House of Commons abolished the monarchy and the House of Lords. It declared the people of England "and of all the Dominions and Territories thereunto belonging" to be henceforth under the governance of a "Commonwealth", effectively a republic.
In Pride's Purge, all members of parliament (including most of the political Presbyterians) who would not accept the need to bring the King to trial had been removed. Thus the Rump never had more than two hundred members (less than half the number of the Commons in the original Long Parliament). They included: supporters of religious independents who did not want an established church and some of whom had sympathies with the Levellers; Presbyterians who were willing to countenance the trial and execution of the King; and later admissions, such as formerly excluded MPs who were prepared to denounce the Newport Treaty negotiations with the King.
Most Rumpers were gentry, though there was a higher proportion of lesser gentry and lawyers than in previous parliaments. Less than one-quarter of them were regicides. This left the Rump as basically a conservative body whose vested interests in the existing land ownership and legal systems made it unlikely to want to reform them.
For the first two years of the Commonwealth, the Rump faced economic depression and the risk of invasion from Scotland and Ireland. By 1653 Cromwell and the Army had largely eliminated these threats.
There were many disagreements amongst factions of the Rump. Some wanted a republic, but others favoured retaining some type of monarchical government. Most of England's traditional ruling classes regarded the Rump as an illegal government made up of regicides and upstarts. However, they were also aware that the Rump might be all that stood in the way of an outright military dictatorship. High taxes, mainly to pay the Army, were resented by the gentry. Limited reforms were enough to antagonise the ruling class but not enough to satisfy the radicals.
Despite its unpopularity, the Rump was a link with the old constitution and helped to settle England down and make it secure after the biggest upheaval in its history. By 1653, France and Spain had recognised England's new government.
Though the Church of England was retained, episcopacy was suppressed and the Act of Uniformity 1558 was repealed in September 1650. Mainly on the insistence of the Army, many independent churches were tolerated, although everyone still had to pay tithes to the established church.
Some small improvements were made to law and court procedure; for example, all court proceedings were now conducted in English rather than in Law French or Latin. However, there were no widespread reforms of the common law. This would have upset the gentry, who regarded the common law as reinforcing their status and property rights.
The Rump passed many restrictive laws to regulate people's moral behaviour, such as closing down theatres and requiring strict observance of Sunday. Laws were also passed banning the celebration of Easter and Christmas. This antagonised most of the gentry.
Cromwell, aided by Thomas Harrison, forcibly dismissed the Rump on 20 April 1653, for reasons that are unclear. Theories are that he feared the Rump was trying to perpetuate itself as the government, or that the Rump was preparing for an election which could return an anti-Commonwealth majority. Many former members of the Rump continued to regard themselves as England's only legitimate constitutional authority. The Rump had not agreed to its own dissolution; its members' legal, constitutional view that the dissolution was unlawful was based on the Act to which Charles had assented on 11 May 1641, prohibiting the dissolution of Parliament without its own consent (on this view, the entire Commonwealth was simply a continuation of the latter years of the Long Parliament).
The dissolution of the Rump was followed by a short period in which Cromwell and the Army ruled alone. Nobody had the constitutional authority to call an election, but Cromwell did not want to impose a military dictatorship. Instead, he ruled through a "nominated assembly" which he believed would be easy for the Army to control since Army officers did the nominating.
Barebone's Parliament was opposed by former Rumpers and ridiculed by many of the gentry as an assembly of inferior people. However, over 110 of its 140 members were lesser gentry or of higher social status; an exception was Praise-God Barebone, a Baptist merchant after whom the Assembly got its derogatory nickname. Many were well educated.
The assembly reflected the range of views of the officers who nominated it. The Radicals (approximately 40) included a hard core of Fifth Monarchists who wanted to be rid of Common Law and any state control of religion. The Moderates (approximately 60) wanted some improvements within the existing system and might move to either the radical or conservative side depending on the issue. The Conservatives (approximately 40) wanted to keep the status quo, since common law protected the interests of the gentry, and tithes and advowsons were valuable property.
Cromwell saw Barebone's Parliament as a temporary legislative body which he hoped would produce reforms and develop a constitution for the Commonwealth. However, members were divided over key issues, only 25 had previous parliamentary experience, and although many had some legal training, there were no qualified lawyers.
Cromwell seems to have expected this group of amateurs to produce reform without management or direction. When the radicals mustered enough support to defeat a bill which would have preserved the status quo in religion, the conservatives, together with many moderates, surrendered their authority back to Cromwell, who sent soldiers to clear the rest of the Assembly. Barebone's Parliament was over.
Throughout 1653, Cromwell and the Army slowly dismantled the machinery of the Commonwealth state. The English Council of State, which had assumed the executive function formerly held by the King and his Privy Council, was forcibly dissolved by Cromwell on 20 April, and in its place a new council, filled with Cromwell's own chosen men, was installed. Three days after Barebone's Parliament dissolved itself, the Instrument of Government was adopted by Cromwell's council and a new state structure, now known historically as The Protectorate, was given its shape. This new constitution granted Cromwell sweeping powers as Lord Protector, an office which ironically had much the same role and powers as the King had under the monarchy, a fact not lost on Cromwell's critics.
On 12 April 1654, under the terms of the Tender of Union, the Ordinance for uniting Scotland into one Commonwealth with England was issued by the Lord Protector and proclaimed in Scotland by the military governor of Scotland, General George Monck, 1st Duke of Albemarle. The ordinance declared that "the people of Scotland should be united with the people of England into one Commonwealth and under one Government" and decreed that a new "Arms of the Commonwealth", incorporating the Saltire, should be placed on "all the public seals, seals of office, and seals of bodies civil or corporate, in Scotland" as "a badge of this Union".
Cromwell and his Council of State spent the first several months of 1654 preparing for the First Protectorate Parliament by drawing up a set of 84 bills for consideration. The Parliament was freely elected (as free as such elections could be in the 17th century) and was consequently filled with a wide range of political interests; it accomplished none of its goals, and, having passed none of Cromwell's proposed bills, it was dissolved by Cromwell as soon as the law allowed.
Having decided that Parliament was not an efficient means of getting his policies enacted, Cromwell instituted a system of direct military rule of England during a period known as the Rule of the Major-Generals: all of England was divided into ten regions, each governed directly by one of Cromwell's Major-Generals, who were given sweeping powers to collect taxes and enforce the peace. The Major-Generals were highly unpopular, a fact that they themselves recognised, and many of them urged Cromwell to call another Parliament to give his rule legitimacy.
Unlike the prior Parliament, which had been open to all eligible males in the Commonwealth, the new elections specifically excluded Catholics and Royalists from running or voting; as a result, it was stocked with members who were more in line with Cromwell's own politics. The first major bill to be brought up for debate was the Militia Bill, which was ultimately voted down by the House. As a result, the authority of the Major-Generals to collect taxes to support their own regimes ended, and the Rule of the Major Generals came to an end. The second piece of major legislation was the passage of the Humble Petition and Advice, a sweeping constitutional reform which had two purposes. The first was to reserve for Parliament certain rights, such as a three-year fixed-term (which the Lord Protector was required to abide by) and to reserve for the Parliament the sole right of taxation. The second, as a concession to Cromwell, was to make the Lord Protector a hereditary position and to convert the title to a formal constitutional Kingship. Cromwell refused the title of King, but accepted the rest of the legislation, which was passed in final form on 25 May 1657.
A second session of the Parliament met in 1658; it allowed previously excluded MPs (who had not been permitted to take their seats because of Catholic or Royalist leanings) to sit. This, however, made the Parliament far less compliant with the wishes of Cromwell and the Major-Generals; it accomplished little in the way of a legislative agenda and was dissolved after a few months.
On the death of Oliver Cromwell in 1658, his son, Richard Cromwell, inherited the title of Lord Protector. Richard had never served in the Army, which meant he lacked the control over the Major-Generals that had been the source of his father's power. The Third Protectorate Parliament was summoned in late 1658 and was seated on 27 January 1659. Its first act was to confirm Richard's role as Lord Protector, which it did by a sizeable, but not overwhelming, majority. Quickly, however, it became apparent that Richard had no control over the Army, and divisions soon developed in the Parliament. One faction called for a recall of the Rump Parliament and a return to the constitution of the Commonwealth, while another preferred the existing constitution. As the factions grew increasingly quarrelsome, Richard dissolved the Parliament. He was quickly removed from power, and the remaining Army leadership recalled the Rump Parliament, setting the stage for the return of the Monarchy a year later.
After the Grandees in the New Model Army removed Richard, they reinstalled the Rump Parliament in May 1659. Charles Fleetwood was appointed a member of the Committee of Safety and of the Council of State, and one of the seven commissioners for the army. On 9 June he was nominated lord-general (commander-in-chief) of the army. However, his power was undermined in parliament, which chose to disregard the army's authority in a similar fashion to the pre–Civil War parliament. On 12 October 1659 the Commons cashiered General John Lambert and other officers, and installed Fleetwood as chief of a military council under the authority of the Speaker. The next day Lambert ordered that the doors of the House be shut and the members kept out. On 26 October a "Committee of Safety" was appointed, of which Fleetwood and Lambert were members. Lambert was appointed major-general of all the forces in England and Scotland, Fleetwood being general. Lambert was now sent, by the Committee of Safety, with a large force to meet George Monck, who was in command of the English forces in Scotland, and either negotiate with him or force him to come to terms.
It was into this atmosphere that General George Monck marched south with his army from Scotland. Lambert's army began to desert him, and he returned to London almost alone. On 21 February 1660, Monck reinstated the Presbyterian members of the Long Parliament "secluded" by Pride, so that they could prepare legislation for a new parliament. Fleetwood was deprived of his command and ordered to appear before parliament to answer for his conduct. On 3 March Lambert was sent to the Tower, from which he escaped a month later. Lambert tried to rekindle the civil war in favour of the Commonwealth by issuing a proclamation calling on all supporters of the "Good Old Cause" to rally on the battlefield of Edgehill. However, he was recaptured by Colonel Richard Ingoldsby, a regicide who hoped to win a pardon by handing Lambert over to the new regime. The Long Parliament dissolved itself on 16 March.
On 4 April 1660, in response to a secret message sent by Monck, Charles II issued the Declaration of Breda, which made known the conditions of his acceptance of the crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April. On 8 May it proclaimed that King Charles II had been the lawful monarch since the execution of Charles I in January 1649. Charles returned from exile on 23 May. He entered London on 29 May, his birthday. To celebrate "his Majesty's Return to his Parliament" 29 May was made a public holiday, popularly known as Oak Apple Day. He was crowned at Westminster Abbey on 23 April 1661. | [
{
"paragraph_id": 0,
"text": "The Commonwealth was the political structure during the period from 1649 to 1660 when England and Wales, later along with Ireland and Scotland, were governed as a republic after the end of the Second English Civil War and the trial and execution of Charles I. The republic's existence was declared through \"An Act declaring England to be a Commonwealth\", adopted by the Rump Parliament on 19 May 1649. Power in the early Commonwealth was vested primarily in the Parliament and a Council of State. During the period, fighting continued, particularly in Ireland and Scotland, between the parliamentary forces and those opposed to them, in the Cromwellian conquest of Ireland and the Anglo-Scottish war of 1650–1652.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 1653, after dissolution of the Rump Parliament, the Army Council adopted the Instrument of Government which made Oliver Cromwell Lord Protector of a united \"Commonwealth of England, Scotland and Ireland\", inaugurating the period now usually known as the Protectorate. After Cromwell's death, and following a brief period of rule under his son, Richard Cromwell, the Protectorate Parliament was dissolved in 1659 and the Rump Parliament recalled, starting a process that led to the restoration of the monarchy in 1660. The term Commonwealth is sometimes used for the whole of 1649 to 1660 – called by some the Interregnum – although for other historians, the use of the term is limited to the years prior to Cromwell's formal assumption of power in 1653.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In retrospect, the period of republican rule for England was a failure in the short term. During the 11-year period, no stable government was established to rule the English state for longer than a few months at a time. Several administrative structures were tried, and several Parliaments called and seated, but little in the way of meaningful, lasting legislation was passed. The only force keeping it together was the personality of Oliver Cromwell, who exerted control through the military by way of the \"Grandees\", being the Major-Generals and other senior military leaders of the New Model Army. Not only did Cromwell's regime crumble into near anarchy upon his death and the brief administration of his son, but the monarchy he overthrew was restored in 1660, and its first act was officially to erase all traces of any constitutional reforms of the Republican period. Still, the memory of the Parliamentarian cause, dubbed the Good Old Cause by the soldiers of the New Model Army, lingered on. It would carry through English politics and eventually result in a constitutional monarchy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Commonwealth period is better remembered for the military success of Thomas Fairfax, Oliver Cromwell, and the New Model Army. Besides resounding victories in the English Civil War, the reformed Navy under the command of Robert Blake defeated the Dutch in the First Anglo-Dutch War which marked the first step towards England's naval supremacy. In Ireland, the Commonwealth period is remembered for Cromwell's brutal subjugation of the Irish, which continued the policies of the Tudor and Stuart periods.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Rump was created by Pride's Purge of those members of the Long Parliament who did not support the political position of the Grandees in the New Model Army. Just before and after the execution of King Charles I on 30 January 1649, the Rump passed a number of acts of Parliament creating the legal basis for the republic. With the abolition of the monarchy, Privy Council and the House of Lords, it had unchecked executive and legislative power. The English Council of State, which replaced the Privy Council, took over many of the executive functions of the monarchy. It was selected by the Rump, and most of its members were MPs. However, the Rump depended on the support of the Army with which it had a very uneasy relationship. After the execution of Charles I, the House of Commons abolished the monarchy and the House of Lords. It declared the people of England \"and of all the Dominions and Territories thereunto belonging\" to be henceforth under the governance of a \"Commonwealth\", effectively a republic.",
"title": "1649–1653"
},
{
"paragraph_id": 5,
"text": "In Pride's Purge, all members of parliament (including most of the political Presbyterians) who would not accept the need to bring the King to trial had been removed. Thus the Rump never had more than two hundred members (less than half the number of the Commons in the original Long Parliament). They included: supporters of religious independents who did not want an established church and some of whom had sympathies with the Levellers; Presbyterians who were willing to countenance the trial and execution of the King; and later admissions, such as formerly excluded MPs who were prepared to denounce the Newport Treaty negotiations with the King.",
"title": "1649–1653"
},
{
"paragraph_id": 6,
"text": "Most Rumpers were gentry, though there was a higher proportion of lesser gentry and lawyers than in previous parliaments. Less than one-quarter of them were regicides. This left the Rump as basically a conservative body whose vested interests in the existing land ownership and legal systems made it unlikely to want to reform them.",
"title": "1649–1653"
},
{
"paragraph_id": 7,
"text": "For the first two years of the Commonwealth, the Rump faced economic depression and the risk of invasion from Scotland and Ireland. By 1653 Cromwell and the Army had largely eliminated these threats.",
"title": "1649–1653"
},
{
"paragraph_id": 8,
"text": "There were many disagreements amongst factions of the Rump. Some wanted a republic, but others favoured retaining some type of monarchical government. Most of England's traditional ruling classes regarded the Rump as an illegal government made up of regicides and upstarts. However, they were also aware that the Rump might be all that stood in the way of an outright military dictatorship. High taxes, mainly to pay the Army, were resented by the gentry. Limited reforms were enough to antagonise the ruling class but not enough to satisfy the radicals.",
"title": "1649–1653"
},
{
"paragraph_id": 9,
"text": "Despite its unpopularity, the Rump was a link with the old constitution and helped to settle England down and make it secure after the biggest upheaval in its history. By 1653, France and Spain had recognised England's new government.",
"title": "1649–1653"
},
{
"paragraph_id": 10,
"text": "Though the Church of England was retained, episcopacy was suppressed and the Act of Uniformity 1558 was repealed in September 1650. Mainly on the insistence of the Army, many independent churches were tolerated, although everyone still had to pay tithes to the established church.",
"title": "1649–1653"
},
{
"paragraph_id": 11,
"text": "Some small improvements were made to law and court procedure; for example, all court proceedings were now conducted in English rather than in Law French or Latin. However, there were no widespread reforms of the common law. This would have upset the gentry, who regarded the common law as reinforcing their status and property rights.",
"title": "1649–1653"
},
{
"paragraph_id": 12,
"text": "The Rump passed many restrictive laws to regulate people's moral behaviour, such as closing down theatres and requiring strict observance of Sunday. Laws were also passed banning the celebration of Easter and Christmas. This antagonised most of the gentry.",
"title": "1649–1653"
},
{
"paragraph_id": 13,
"text": "Cromwell, aided by Thomas Harrison, forcibly dismissed the Rump on 20 April 1653, for reasons that are unclear. Theories are that he feared the Rump was trying to perpetuate itself as the government, or that the Rump was preparing for an election which could return an anti-Commonwealth majority. Many former members of the Rump continued to regard themselves as England's only legitimate constitutional authority. The Rump had not agreed to its own dissolution; their legal, constitutional view that it was unlawful was based on Charles' concessionary Act prohibiting the dissolution of Parliament without its own consent (on 11 May 1641, leading to the entire Commonwealth being the latter years of the Long Parliament in their majority view).",
"title": "1649–1653"
},
{
"paragraph_id": 14,
"text": "The dissolution of the Rump was followed by a short period in which Cromwell and the Army ruled alone. Nobody had the constitutional authority to call an election, but Cromwell did not want to impose a military dictatorship. Instead, he ruled through a \"nominated assembly\" which he believed would be easy for the Army to control since Army officers did the nominating.",
"title": "1649–1653"
},
{
"paragraph_id": 15,
"text": "Barebone's Parliament was opposed by former Rumpers and ridiculed by many gentries as being an assembly of inferior people. Over 110 of its 140 members were lesser gentry or of higher social status; an exception was Praise-God Barebone, a Baptist merchant after whom the Assembly got its derogatory nickname. Many were well educated.",
"title": "1649–1653"
},
{
"paragraph_id": 16,
"text": "The assembly reflected the range of views of the officers who nominated it. The Radicals (approximately 40) included a hard core of Fifth Monarchists who wanted to be rid of Common Law and any state control of religion. The Moderates (approximately 60) wanted some improvements within the existing system and might move to either the radical or conservative side depending on the issue. The Conservatives (approximately 40) wanted to keep the status quo, since common law protected the interests of the gentry, and tithes and advowsons were valuable property.",
"title": "1649–1653"
},
{
"paragraph_id": 17,
"text": "Cromwell saw Barebone's Parliament as a temporary legislative body which he hoped would produce reforms and develop a constitution for the Commonwealth. However, members were divided over key issues, only 25 had previous parliamentary experience, and although many had some legal training, there were no qualified lawyers.",
"title": "1649–1653"
},
{
"paragraph_id": 18,
"text": "Cromwell seems to have expected this group of amateurs to produce reform without management or direction. When the radicals mustered enough support to defeat a bill which would have preserved the status quo in religion, the conservatives, together with many moderates, surrendered their authority back to Cromwell, who sent soldiers to clear the rest of the Assembly. Barebone's Parliament was over.",
"title": "1649–1653"
},
{
"paragraph_id": 19,
"text": "Throughout 1653, Cromwell and the Army slowly dismantled the machinery of the Commonwealth state. The English Council of State, which had assumed the executive function formerly held by the King and his Privy Council, was forcibly dissolved by Cromwell on 20 April, and in its place a new council, filled with Cromwell's own chosen men, was installed. Three days after Barebone's Parliament dissolved itself, the Instrument of Government was adopted by Cromwell's council and a new state structure, now known historically as The Protectorate, was given its shape. This new constitution granted Cromwell sweeping powers as Lord Protector, an office which ironically had much the same role and powers as the King had under the monarchy, a fact not lost on Cromwell's critics.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 20,
"text": "On 12 April 1654, under the terms of the Tender of Union, the Ordinance for uniting Scotland into one Commonwealth with England was issued by the Lord Protector and proclaimed in Scotland by the military governor of Scotland, General George Monck, 1st Duke of Albemarle. The ordinance declared that \"the people of Scotland should be united with the people of England into one Commonwealth and under one Government\" and decreed that a new \"Arms of the Commonwealth\", incorporating the Saltire, should be placed on \"all the public seals, seals of office, and seals of bodies civil or corporate, in Scotland\" as \"a badge of this Union\".",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 21,
"text": "Cromwell and his Council of State spent the first several months of 1654 preparing for the First Protectorate Parliament by drawing up a set of 84 bills for consideration. The Parliament was freely elected (as free as such elections could be in the 17th century) and as such, the Parliament was filled with a wide range of political interests, and as such did not accomplish any of its goals; it was dissolved as soon as law would allow by Cromwell having passed none of Cromwell's proposed bills.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 22,
"text": "Having decided that Parliament was not an efficient means of getting his policies enacted, Cromwell instituted a system of direct military rule of England during a period known as the Rule of the Major-Generals; all of England was divided into ten regions, each was governed directly by one of Cromwell's Major-Generals, who were given sweeping powers to collect taxes and enforce the peace. The Major-Generals were highly unpopular, a fact that they themselves noticed and many urged Cromwell to call another Parliament to give his rule legitimacy.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 23,
"text": "Unlike the prior Parliament, which had been open to all eligible males in the Commonwealth, the new elections specifically excluded Catholics and Royalists from running or voting; as a result, it was stocked with members who were more in line with Cromwell's own politics. The first major bill to be brought up for debate was the Militia Bill, which was ultimately voted down by the House. As a result, the authority of the Major-Generals to collect taxes to support their own regimes ended, and the Rule of the Major Generals came to an end. The second piece of major legislation was the passage of the Humble Petition and Advice, a sweeping constitutional reform which had two purposes. The first was to reserve for Parliament certain rights, such as a three-year fixed-term (which the Lord Protector was required to abide by) and to reserve for the Parliament the sole right of taxation. The second, as a concession to Cromwell, was to make the Lord Protector a hereditary position and to convert the title to a formal constitutional Kingship. Cromwell refused the title of King, but accepted the rest of the legislation, which was passed in final form on 25 May 1657.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 24,
"text": "A second session of the Parliament met in 1658; it allowed previously excluded MPs (who had been not allowed to take their seats because of Catholic and/or Royalist leanings) to take their seats, however, this made the Parliament far less compliant to the wishes of Cromwell and the Major-Generals; it accomplished little in the way of a legislative agenda and was dissolved after a few months.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 25,
"text": "On the death of Oliver Cromwell in 1658, his son, Richard Cromwell, inherited the title, Lord Protector. Richard had never served in the Army, which meant he lost control over the Major-Generals that had been the source of his own father's power. The Third Protectorate Parliament was summoned in late 1658 and was seated on 27 January 1659. Its first act was to confirm Richard's role as Lord Protector, which it did by a sizeable, but not overwhelming, majority. Quickly, however, it became apparent that Richard had no control over the Army and divisions quickly developed in the Parliament. One faction called for a recall of the Rump Parliament and a return to the constitution of the Commonwealth, while another preferred the existing constitution. As the parties grew increasingly quarrelsome, Richard dissolved it. He was quickly removed from power, and the remaining Army leadership recalled the Rump Parliament, setting the stage for the return of the Monarchy a year later.",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 26,
"text": "",
"title": "The Protectorate, 1653–1659"
},
{
"paragraph_id": 27,
"text": "After the Grandees in the New Model Army removed Richard, they reinstalled the Rump Parliament in May 1659. Charles Fleetwood was appointed a member of the Committee of Safety and of the Council of State, and one of the seven commissioners for the army. On 9 June he was nominated lord-general (commander-in-chief) of the army. However, his power was undermined in parliament, which chose to disregard the army's authority in a similar fashion to the pre–Civil War parliament. On 12 October 1659 the Commons cashiered General John Lambert and other officers, and installed Fleetwood as chief of a military council under the authority of the Speaker. The next day Lambert ordered that the doors of the House be shut and the members kept out. On 26 October a \"Committee of Safety\" was appointed, of which Fleetwood and Lambert were members. Lambert was appointed major-general of all the forces in England and Scotland, Fleetwood being general. Lambert was now sent, by the Committee of Safety, with a large force to meet George Monck, who was in command of the English forces in Scotland, and either negotiate with him or force him to come to terms.",
"title": "1659–1660"
},
{
"paragraph_id": 28,
"text": "It was into this atmosphere that General George Monck marched south with his army from Scotland. Lambert's army began to desert him, and he returned to London almost alone. On 21 February 1660, Monck reinstated the Presbyterian members of the Long Parliament \"secluded\" by Pride, so that they could prepare legislation for a new parliament. Fleetwood was deprived of his command and ordered to appear before parliament to answer for his conduct. On 3 March Lambert was sent to the Tower, from which he escaped a month later. Lambert tried to rekindle the civil war in favour of the Commonwealth by issuing a proclamation calling on all supporters of the \"Good Old Cause\" to rally on the battlefield of Edgehill. However, he was recaptured by Colonel Richard Ingoldsby, a regicide who hoped to win a pardon by handing Lambert over to the new regime. The Long Parliament dissolved itself on 16 March.",
"title": "1659–1660"
},
{
"paragraph_id": 29,
"text": "On 4 April 1660, in response to a secret message sent by Monck, Charles II issued the Declaration of Breda, which made known the conditions of his acceptance of the crown of England. Monck organised the Convention Parliament, which met for the first time on 25 April. On 8 May it proclaimed that King Charles II had been the lawful monarch since the execution of Charles I in January 1649. Charles returned from exile on 23 May. He entered London on 29 May, his birthday. To celebrate \"his Majesty's Return to his Parliament\" 29 May was made a public holiday, popularly known as Oak Apple Day. He was crowned at Westminster Abbey on 23 April 1661.",
"title": "1659–1660"
}
] | The Commonwealth was the political structure during the period from 1649 to 1660 when England and Wales, later along with Ireland and Scotland, were governed as a republic after the end of the Second English Civil War and the trial and execution of Charles I. The republic's existence was declared through "An Act declaring England to be a Commonwealth", adopted by the Rump Parliament on 19 May 1649. Power in the early Commonwealth was vested primarily in the Parliament and a Council of State. During the period, fighting continued, particularly in Ireland and Scotland, between the parliamentary forces and those opposed to them, in the Cromwellian conquest of Ireland and the Anglo-Scottish war of 1650–1652. In 1653, after dissolution of the Rump Parliament, the Army Council adopted the Instrument of Government which made Oliver Cromwell Lord Protector of a united "Commonwealth of England, Scotland and Ireland", inaugurating the period now usually known as the Protectorate. After Cromwell's death, and following a brief period of rule under his son, Richard Cromwell, the Protectorate Parliament was dissolved in 1659 and the Rump Parliament recalled, starting a process that led to the restoration of the monarchy in 1660. The term Commonwealth is sometimes used for the whole of 1649 to 1660 – called by some the Interregnum – although for other historians, the use of the term is limited to the years prior to Cromwell's formal assumption of power in 1653. In retrospect, the period of republican rule for England was a failure in the short term. During the 11-year period, no stable government was established to rule the English state for longer than a few months at a time. Several administrative structures were tried, and several Parliaments called and seated, but little in the way of meaningful, lasting legislation was passed. The only force keeping it together was the personality of Oliver Cromwell, who exerted control through the military by way of the "Grandees", being the Major-Generals and other senior military leaders of the New Model Army. Not only did Cromwell's regime crumble into near anarchy upon his death and the brief administration of his son, but the monarchy he overthrew was restored in 1660, and its first act was officially to erase all traces of any constitutional reforms of the Republican period. Still, the memory of the Parliamentarian cause, dubbed the Good Old Cause by the soldiers of the New Model Army, lingered on. It would carry through English politics and eventually result in a constitutional monarchy. The Commonwealth period is better remembered for the military success of Thomas Fairfax, Oliver Cromwell, and the New Model Army. Besides resounding victories in the English Civil War, the reformed Navy under the command of Robert Blake defeated the Dutch in the First Anglo-Dutch War which marked the first step towards England's naval supremacy. In Ireland, the Commonwealth period is remembered for Cromwell's brutal subjugation of the Irish, which continued the policies of the Tudor and Stuart periods. | 2001-11-17T00:14:57Z | 2023-11-14T16:39:42Z | [
"Template:S-bef",
"Template:S-end",
"Template:Use British English",
"Template:Use dmy dates",
"Template:Infobox historical era",
"Template:Citation",
"Template:S-start",
"Template:S-aft",
"Template:English monarchs",
"Template:Short description",
"Template:Distinguish",
"Template:Main",
"Template:Wikisource",
"Template:Reflist",
"Template:Anchor",
"Template:Harv",
"Template:S-ttl",
"Template:History of England",
"Template:Authority control",
"Template:Sfn",
"Template:Citation needed",
"Template:Infobox country",
"Template:See also",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Commonwealth_of_England |
7,131 | Charles Evers | James Charles Evers (September 11, 1922 – July 22, 2020) was an American civil rights activist, businessman, radio personality, and politician. Evers was known for his role in the civil rights movement along with his younger brother Medgar Evers. After serving in World War II, Evers began his career as a disc jockey at WHOC in Philadelphia, Mississippi. In 1954, he was made the National Association for the Advancement of Colored People (NAACP) State Voter Registration chairman. After his brother's assassination in 1963, Evers took over his position as field director of the NAACP in Mississippi. In this role, he organized and led many demonstrations for the rights of African Americans.
In 1969, Evers was named "Man of the Year" by the NAACP. On June 3, 1969, Evers was elected in Fayette, Mississippi, as the first African-American mayor of a biracial town in Mississippi since the Reconstruction era, following passage of the Voting Rights Act of 1965 which enforced constitutional rights for citizens.
At the time of Evers's election as mayor, the town of Fayette had a population of 1,600 of which 75% was African-American and almost 25% white; the white officers on the Fayette city police "resigned rather than work under a black administration," according to the Associated Press. Evers told reporters "I guess we will just have to operate with an all-black police department for the present. But I am still looking for some whites to join us in helping Fayette grow." Evers then outlawed the carrying of firearms within city limits.
He ran for governor in 1971 and the United States Senate in 1978, both times as an independent candidate. In 1989, Evers was defeated for re-election after serving sixteen years as mayor. In his later life, he became a Republican, endorsing Ronald Reagan in 1980, and more recently Donald Trump in 2016. This diversity in party affiliations throughout his life was reflected in his fostering of friendships with people from a variety of backgrounds, as well as his advising of politicians from across the political spectrum. After his political career ended, he returned to radio and hosted his own show, Let's Talk. In 2017, Evers was inducted into the National Rhythm & Blues Hall of Fame for his contributions to the music industry.
Charles Evers was born in Decatur, Mississippi, on September 11, 1922, to James Evers, a laborer, and Jesse Wright Evers, a maid. He was the eldest of four children; Medgar Evers was his younger brother. He attended segregated public schools, which were typically underfunded in Mississippi following the exclusion of African Americans from the political system by disenfranchisement after 1890. Evers graduated from Alcorn State University in Lorman, Mississippi.
During World War II, Charles and Medgar Evers both served in the United States Army. Charles fell in love with a Filipina woman while stationed overseas. He could not marry her and bring her home to Mississippi because the state's constitution prohibited interracial marriages.
During the war he established a brothel in Quezon City which catered to American servicemen. After serving a year of reserve duty following the Korean War, he settled in Philadelphia, Mississippi. In 1949, he began working as a disc jockey at WHOC, making him the first black disc jockey in the state. By the early 1950s, he was managing a hotel, cab company, and burial insurance business in the town. He had a cafe in Philadelphia and influenced over two hundred black citizens to pay their poll tax. Forced to leave due to local white hostility in 1956, he moved to Chicago. Low on money, he began working as a meatpacker in stockyards during the day and as an attendant for the men's restroom at the Conrad Hilton Hotel at nights. He also began pimping and ran a numbers game, taking $500 a week from the latter. He gained enough money to purchase several bars, bootlegged liquor, and sold jukeboxes.
In Mississippi about 1951, brothers Charles and Medgar Evers grew interested in African freedom movements. They followed Jomo Kenyatta and the rise of the Kikuyu tribal resistance to colonialism in Kenya, which became known as the Mau Mau uprising as it moved to open violence. Along with his brother, Charles became active in the Regional Council of Negro Leadership (RCNL), a civil rights organization that promoted self-help and business ownership. He also helped his brother with black voter registration drives. Between 1952 and 1955, Evers often spoke at the RCNL's annual conferences in Mound Bayou, a town founded by freedmen, on such issues as voting rights. His brother Medgar continued to be involved in civil rights, becoming field secretary and head of the National Association for the Advancement of Colored People (NAACP) in Mississippi. While working in Chicago, Charles sent money to Medgar without specifying its source.
On June 12, 1963, Byron De La Beckwith, a member of a Ku Klux Klan chapter, fatally shot Evers's brother, Medgar, in Mississippi as he arrived home from work. Medgar died at the hospital in Jackson. Charles learned of his brother's death several hours later and flew to Jackson in the morning. Deeply upset by the assassination, he heavily involved himself in the planning of his brother's funeral. He decided to relocate to Mississippi to carry on his brother's work. Journalist Jason Berry, who later worked for Charles, said, "I think he wanted to be a better person. I think Medgar's death was a cathartic experience." A decade after Medgar's death, Evers and blues musician B.B. King created the Medgar Evers Homecoming Festival, an annual three-day event held the first week of June in Mississippi.
Over the opposition of more establishment figures in the National Association for the Advancement of Colored People (NAACP) such as Roy Wilkins, Evers took over his brother's post as head of the NAACP in Mississippi. Wilkins never managed a friendly relationship with Evers, and Medgar's widow, Myrlie, also disapproved of Charles' replacing him. A staunch believer in racial integration, he distrusted what he viewed as the militancy and separatism of the Student Nonviolent Coordinating Committee and the Mississippi Freedom Democratic Party, a black-dominated breakaway of the segregationist Mississippi Democratic Party. In 1965 he launched a series of successful black boycotts in southwestern Mississippi which partnered with the Natchez Deacons for Defense and Justice, which won concessions from the Natchez authorities and ratified his unconventional boycott methods. Often accompanied by a group of 65 male followers, he would pressure local blacks in small towns to avoid stores under boycott and directly challenge white business leaders. He also led a voter registration campaign. He coordinated his efforts from the small town of Fayette in Jefferson County. Fayette was a small, economically depressed town of about 2,500 people. About three-fourths of the population was black, and they had long been socially and economically subordinate to the white minority. Evers moved the NAACP's Mississippi field office from Jackson to Fayette to take advantage of the potential of the black majority and achieve political influence in Jefferson and two adjacent counties. He explained, "My feeling is that Negroes gotta control somewhere in America, and we've dropped anchor in these counties. We are going to control these three counties in the next ten years. There is no question about it."
With his voter registration drives having made Fayette's number of black registered voters double the size of the white electorate, Evers helped elect a black man to the local school board in 1966. He also established the Medgar Evers Community Center at the outskirts of town, which served as a center for registration efforts, grocery store, restaurant, and dance hall. By early 1968 he had established a network of local NAACP branches in the region. The president of each branch served as Evers' deputies, and he attended all of their meetings. That year he made a bid for the open seat of the 3rd congressional district in the U.S. House of Representatives, facing six white opponents in the Democratic primary. Though low on funds, he led in the primary with a plurality of the votes. The Mississippi Legislature responded by passing a law mandating a runoff primary in the event of no absolute majority in the initial contest, which Evers lost. He also supported Robert F. Kennedy's 1968 presidential campaign, serving as co-director of his Mississippi campaign organization, and was with Kennedy in Los Angeles when he was assassinated.
In May 1969, Evers ran for the office of Mayor of Fayette and defeated white incumbent R. J. Allen, 386 votes to 255. This made him the first black mayor of a biracial Mississippi town (unlike the all-black Mound Bayou) since Reconstruction. Evers' election as mayor had great symbolic significance statewide and attracted national attention. The NAACP named Evers their 1969 Man of the Year. Evers popularized the slogan, "Hands that picked cotton can now pick the mayor." The local white community was bitter about his victory, but he became intensely popular among Mississippi's blacks. To celebrate his victory, he hosted an inaugural ball in Natchez, which was widely attended by black Mississippians, reporters from around the country, and prominent national liberals including Ramsey Clark, Ted Sorensen, Whitney Young, Julian Bond, Shirley MacLaine, and Paul O'Dwyer. The white-dominated school board refused to let Evers be sworn in on property under its jurisdiction, so he took his oath of office in a parking lot.
Evers appointed a black police force and several black staff members. He also benefitted from an influx of young, white liberal volunteers who wanted to assist a civil rights leader. Many ended up leaving after growing disillusioned with Evers' pursuit of personal financial success and domineering leadership style. Evers sought to make Fayette an upstanding community and a symbolic refuge for black people. Repulsed by the behavior of poor blacks in the town, he ordered the police force to enforce a 25-mile per hour speed limit on local roads, banned cursing in public, and cracked down on truancy. He also prohibited the carrying of firearms in town but kept a gun on himself. He quickly responded to concerns from poor blacks while making white businessmen wait outside of his office. Rhetorically, he would vacillate between messages of racial conciliation and statements of hostility.
Fayette's white population remained bitter about Evers' victory. Many avoided the city hall where they used to socialize and The Fayette Chronicle regularly criticized him. He argued with the county board of supervisors over his plan to erect busts of his brother, Martin Luther King Jr., and the Kennedys on the courthouse square. He told the press, "They're cooperating because they haven't blown my head off. This is Mississippi." In September 1969, a Klansman drove into Fayette with a collection of weapons, intending to assassinate Evers. A white resident tipped off the mayor and the Klansman was arrested. The Klansman defended his motives by saying, "I am a Mississippi white man".
Evers' moralistic style began to create discontent; in early 1970, most of Fayette's police department resigned, saying the mayor had treated them "like dogs". Evers complained that local blacks were "jealous" of him. As the judge in the municipal court, he personally issued fines for infractions such as cursing in public. He regularly ignored the input of the town board of aldermen, and town employee Charles Ramberg reported that he said he would fire municipal workers who would not vote for him. During Evers' tenure, Fayette benefitted from several federal grants, and ITT Inc. built an assembly plant in the town, but the region's economy largely remained depressed. By 1981, Jefferson County had the highest unemployment rate in the state.
Whites' perception that Evers was venal and self-interested persisted and began to spread among the black community. This problem ballooned when in 1974 the Internal Revenue Service arranged for him to be indicted for tax evasion by failing to report $156,000 in income he garnered in the late 1960s. Prosecutors further accused him of depositing town funds in a personal bank account. His attorney told the court that Evers had indeed concealed the income, but argued that the charge was invalid since this had been done before the late 1960s, as the indictment specified. The case resulted in a mistrial, but Evers' reputation permanently suffered. In the late 1970s he used a $5,300 federal grant to renovate a building he owned which he leased to a federal day care program, and used some of the employees for personal business.
Evers served many terms as mayor of Fayette. Admired by some, he alienated others with his inflexible stands on various issues and did not like to share or delegate power. He lost the Democratic primary for mayor in 1981 to Kennie Middleton. Four years later, Evers defeated Middleton in the primaries and won back the office of mayor. In 1989, he lost the nomination once again to political rival Kennie Middleton. Evers accepted the defeat, saying he was tired: "Twenty years is enough. I'm tired of being out front. Let someone else be out front."
Evers began mulling the possibility of a campaign for the office of governor in 1969. He decided to enter the 1971 gubernatorial election as an independent, kicking off his campaign with a rally in Decatur. He later explained his reason for launching the bid, saying, "I ran for governor because if someone doesn't start running, there will never be a black man or a black woman governor of the state of Mississippi." He endorsed white segregationist Jimmy Swan in the Democratic primary, reasoning that if Swan won the nomination, moderate whites would be more inclined to vote for himself in the general election. He campaigned on a platform of reduced taxes—particularly for lower property taxes on the elderly, improved healthcare, and legalizing gambling along the Gulf Coast. Low on money, his candidacy was largely funded by the sale of campaign buttons and copies of his recently published autobiography. His campaign staff was largely young and inexperienced and lacked organization.
Evers' rallies drew large crowds of blacks. The Clarion-Ledger, a leading Mississippian conservative newspaper, largely ignored his campaign. To gain attention, he unexpectedly gatecrashed the annual Fisherman's Rodeo in Pascagoula and stopped and spoke to people on the streets of Jackson during their morning commute. Police departments in rural towns were often horrified by the arrival of his campaign caravan. A total of 269 other black candidates were running for office in Mississippi that year, and many of them complained that Evers was self-absorbed and hoarding resources, despite his slim chances of winning. Evers did little to support them.
In the general election, Evers faced Democratic nominee Bill Waller and independent segregationist Thomas Pickens Brady. Waller and Evers were personally acquainted with one another, as Waller had prosecuted Beckwith for the murder of Medgar. Despite the fears of public observers, the campaign was largely devoid of racism and Evers and Waller avoided negative tactics. Though about 40 percent of the Mississippi electorate in 1971 was black, Evers only secured about 22 percent of the total vote; Waller won with 601,222 votes to Evers' 172,762 and Brady's 6,653. The night of the election, Evers shook the hands of Waller supporters in Jackson and then went to a local television station where his opponent was delivering a victory speech. Learning that Evers had arrived, Waller's nervous aides hurried the governor-elect to his car. Evers approached the car shortly before its departure and told Waller, "I just wanted to congratulate you." Waller replied, "Whaddya say, Charlie?" and his wife leaned over and shook Evers' hand.
In 1978, Evers ran as an independent for the U.S. Senate seat vacated by Democrat James Eastland. He finished in third place behind his opponents, Democrat Maurice Dantin and Republican Thad Cochran. He received 24 percent of the vote, likely siphoning off African-American votes that would have otherwise gone to Dantin. Cochran won the election with a plurality of 45 percent of the vote. With the shift in white voters moving into the Republican Party in the state (and the rest of the South), Cochran was continuously re-elected to his Senate seat. After his failed Senate race, Evers briefly switched political parties and became a Republican.
In 1983, Evers ran as an independent for governor of Mississippi but lost to the Democrat Bill Allain. Republican Leon Bramlett of Clarksdale, also known as a college All-American football player, finished second with 39 percent of the vote.
Evers endorsed Ronald Reagan for President of the United States during the 1980 United States presidential election. Evers later attracted controversy for his support of judicial nominee Charles W. Pickering, a Republican, who was nominated by President George H. W. Bush for a seat on the U.S. Court of Appeals. Evers criticized the NAACP and other organizations for opposing Pickering, as he said the candidate had a record of supporting the civil rights movement in Mississippi.
Evers befriended a range of people from sharecroppers to presidents. He was an informal adviser to politicians as diverse as Lyndon B. Johnson, George C. Wallace, Ronald Reagan and Robert F. Kennedy. Evers severely criticized such national leaders as Roy Wilkins, Stokely Carmichael, H. Rap Brown and Louis Farrakhan over various issues.
Evers had been a member of the Republican Party for 30 years when he spoke warmly of the 2008 election of Barack Obama as the first black President of the United States. During the 2016 presidential election, Evers supported Donald Trump's presidential campaign.
Evers wrote two autobiographies or memoirs: Evers (1971), written with Grace Halsell and self-published; and Have No Fear, written with Andrew Szanton and published by John Wiley & Sons (1997).
Evers was briefly married to Christine Evers until their marriage ended in annulment. In 1951, Evers married Nannie L. Magee, with whom he had four daughters. The couple divorced in June 1974. Evers lived in Brandon, Mississippi, and served as station manager of WMPR 90.1 FM in Jackson.
On July 22, 2020, Evers died in Brandon at age 97.
Evers was portrayed by Bill Cobbs in the film Ghosts of Mississippi (1996). | [
{
"paragraph_id": 0,
"text": "James Charles Evers (September 11, 1922 – July 22, 2020) was an American civil rights activist, businessman, radio personality, and politician. Evers was known for his role in the civil rights movement along with his younger brother Medgar Evers. After serving in World War II, Evers began his career as a disc jockey at WHOC in Philadelphia, Mississippi. In 1954, he was made the National Association for the Advancement of Colored People (NAACP) State Voter Registration chairman. After his brother's assassination in 1963, Evers took over his position as field director of the NAACP in Mississippi. In this role, he organized and led many demonstrations for the rights of African Americans.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 1969, Evers was named \"Man of the Year\" by the NAACP. On June 3, 1969, Evers was elected in Fayette, Mississippi, as the first African-American mayor of a biracial town in Mississippi since the Reconstruction era, following passage of the Voting Rights Act of 1965 which enforced constitutional rights for citizens.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At the time of Evers's election as mayor, the town of Fayette had a population of 1,600 of which 75% was African-American and almost 25% white; the white officers on the Fayette city police \"resigned rather than work under a black administration,\" according to the Associated Press. Evers told reporters \"I guess we will just have to operate with an all-black police department for the present. But I am still looking for some whites to join us in helping Fayette grow.\" Evers then outlawed the carrying of firearms within city limits.",
"title": ""
},
{
"paragraph_id": 3,
"text": "He ran for governor in 1971 and the United States Senate in 1978, both times as an independent candidate. In 1989, Evers was defeated for re-election after serving sixteen years as mayor. In his later life, he became a Republican, endorsing Ronald Reagan in 1980, and more recently Donald Trump in 2016. This diversity in party affiliations throughout his life was reflected in his fostering of friendships with people from a variety of backgrounds, as well as his advising of politicians from across the political spectrum. After his political career ended, he returned to radio and hosted his own show, Let's Talk. In 2017, Evers was inducted into the National Rhythm & Blues Hall of Fame for his contributions to the music industry.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Charles Evers was born in Decatur, Mississippi, on September 11, 1922, to James Evers, a laborer, and Jesse Wright Evers, a maid. He was the eldest of four children; Medgar Evers was his younger brother. He attended segregated public schools, which were typically underfunded in Mississippi following the exclusion of African Americans from the political system by disenfranchisement after 1890. Evers graduated from Alcorn State University in Lorman, Mississippi.",
"title": "Early life and education"
},
{
"paragraph_id": 5,
"text": "During World War II, Charles and Medgar Evers both served in the United States Army. Charles fell in love with a Philippine woman while stationed overseas. He could not marry her and bring her home to Mississippi because the state's constitution prohibited interracial marriages.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "During the war he established a brothel in Quezon City which catered to American servicemen. After serving a year of reserve duty following the Korean War, he settled in Philadelphia, Mississippi. In 1949, he began working as a disc jockey at WHOC, making him the first black disc jockey in the state. By the early 1950s, he was managing a hotel, cab company, and burial insurance business in the town. He had a cafe in Philadelphia and influenced over two hundred black citizens to pay their poll tax. Forced to leave due to local white hostility in 1956, he moved to Chicago. Low on money, he began working as a meatpacker in stockyards during the day and as an attendant for the men's restroom at the Conrad Hilton Hotel at nights. He also began pimping and ran a numbers game, taking $500 a week from the latter. He gained enough money to purchase several bars, bootlegged liquor, and sold jukeboxes.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "In Mississippi about 1951, brothers Charles and Medgar Evers grew interested in African freedom movements. They were interested in Jomo Kenyatta and the rise of the Kikuyu tribal resistance to colonialism in Kenya, known as the Mau Mau uprising as it moved to open violence. Along with his brother, Charles became active in the Regional Council of Negro Leadership (RCNL), a civil rights organization that promoted self-help and business ownership. He also helped his brother with black voter registration drives. Between 1952 and 1955, Evers often spoke at the RCNL's annual conferences in Mound Bayou, a town founded by freedmen, on such issues as voting rights. His brother Medgar continued to be involved in civil rights, becoming field secretary and head of the National Association for the Advancement of Colored People (NAACP) in Mississippi. While working in Chicago he sent money to him, not specifying the source.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "On June 12, 1963, Byron De La Beckwith, a member of a Ku Klux Klan chapter, fatally shot Evers's brother, Medgar, in Mississippi as he arrived home from work. Medgar died at the hospital in Jackson. Charles learned of his brother's death several hours later and flew to Jackson in the morning. Deeply upset by the assassination, he heavily involved himself in the planning of his brother's funeral. He decided to relocate to Mississippi to carry on his brother's work. Journalist Jason Berry, who later worked for Charles, said, \"I think he wanted to be a better person. I think Medgar's death was a cathartic experience.\" A decade after his death, Evers and blues musician B.B. King created the Medgar Evers Homecoming Festival, an annual three-day event held the first week of June in Mississippi.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "Over the opposition of more establishment figures in the National Association for the Advancement of Colored People (NAACP) such as Roy Wilkins, Evers took over his brother's post as head of the NAACP in Mississippi. Wilkins never managed a friendly relationship with Evers, and Medgar's widow, Myrlie, also disapproved of Charles' replacing him. A staunch believer in racial integration, he distrusted what he viewed as the militancy and separatism of the Student Nonviolent Coordinating Committee and the Mississippi Freedom Democratic Party, a black-dominated breakaway of the segregationist Mississippi Democratic Party. In 1965 he launched a series of successful black boycotts in southwestern Mississippi which partnered with the Natchez Deacons for Defense and Justice, which won concessions from the Natchez authorities and ratified his unconventional boycott methods. Often accompanied by a group of 65 male followers, he would pressure local blacks in small towns to avoid stores under boycott and directly challenge white business leaders. He also led a voter registration campaign. He coordinated his efforts from the small town of Fayette in Jefferson County. Fayette was a small, economically depressed town of about 2,500 people. About three-fourths of the population was black, and they had long been socially and economically subordinate to the white minority. Evers moved the NAACP's Mississippi field office from Jackson to Fayette to take advantage of the potential of the black majority and achieve political influence in Jefferson and two adjacent counties. He explained, \"My feeling is that Negroes gotta control somewhere in America, and we've dropped anchor in these counties. We are going to control these three counties in the next ten years. There is no question about it.\"",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "With his voter registration drives having made Fayette's number of black registered voters double the size of the white electorate, Evers helped elect a black man to the local school board in 1966. He also established the Medgar Evers Community Center at the outskirts of town, which served as a center for registration efforts, grocery store, restaurant, and dance hall. By early 1968 he had established a network of local NAACP branches in the region. The president of each branch served as Evers' deputies, and he attended all of their meetings. That year he made a bid for the open seat of the 3rd congressional district in the U.S. House of Representatives, facing six white opponents in the Democratic primary. Though low on funds, he led in the primary with a plurality of the votes. The Mississippi Legislature responded by passing a law mandating a runoff primary in the event of no absolute majority in the initial contest, which Evers lost. He also supported Robert F. Kennedy's 1968 presidential campaign, serving as co-director of his Mississippi campaign organization, and was with Kennedy in Los Angeles when he was assassinated.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "In May 1969, Evers ran for the office of Mayor of Fayette and defeated white incumbent R. J. Allen, 386 votes to 255. This made him the first black mayor of a biracial Mississippi town (unlike the all-black Mound Bayou) since Reconstruction. Evers' election as mayor had great symbolic significance statewide and attracted national attention. The NAACP named Evers their 1969 Man of the Year. Evers popularized the slogan, \"Hands that picked cotton can now pick the mayor.\" The local white community was bitter about his victory, but he became intensively popular among Mississippi's blacks. To celebrate his victory, he hosted an inaugural ball in Natchez, which was widely attended by black Mississippians, reporters from around the country, and prominent national liberals including Ramsey Clark, Ted Sorenson, Whitney Young, Julian Bond, Shirley MacLaine, and Paul O'Dwyer. The white-dominated school board refused to let Evers swear-in on property under their jurisdiction, so he took his oath of office in a parking lot.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "Evers appointed a black police force and several black staff members. He also benefitted from an influx of young, white liberal volunteers who wanted to assist a civil rights leader. Many ended up leaving after growing disillusioned with Evers' pursuit of personal financial success and domineering leadership style. Evers sought to make Fayette an upstanding community and a symbolic refuge for black people. Repulsed by the behavior of poor blacks in the town, he ordered the police force to enforce a 25-mile per hour speed limit on local roads, banned cursing in public, and cracked down on truancy. He also prohibited the carrying of firearms in town but kept a gun on himself. He quickly responded to concerns from poor blacks while making white businessmen wait outside of his office. Rhetorically, he would vacillate between messages of racial conciliation and statements of hostility.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "Fayette's white population remained bitter about Evers' victory. Many avoided the city hall where they used to socialize and The Fayette Chronicle regularly criticized him. He argued with the county board of supervisors over his plan to erect busts of his brother, Martin Luther King Jr., and the Kennedys on the courthouse square. He told the press, \"They're cooperating because they haven't blown my head off. This is Mississippi.\" In September 1969, a Klansman drove into Fayette with a collection of weapons, intending to assassinate Evers. A white resident tipped off the mayor and the Klansman was arrested. The Klansman defended his motives by saying, \"I am a Mississippi white man\".",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "Evers' moralistic style began to create discontent; in early 1970, most of Fayette's police department resigned, saying the mayor had treated them \"like dogs\". Evers complained that local blacks were \"jealous\" of him. As the judge in the municipal court, he personally issued fines for infractions such as cursing in public. He regularly ignored the input of the town board of aldermen, and town employee Charles Ramberg reported that he said he would fire municipal workers who would not vote for him. During Evers' tenure, Fayette benefitted from several federal grants, and ITT Inc. built an assembly plant in the town, but the region's economy largely remained depressed. By 1981, Jefferson County had the highest unemployment rate in the state.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "Whites' perception that Evers was venal and self-interested persisted and began to spread among the black community. This problem ballooned when in 1974 the Internal Revenue Service arranged for him to be indicted for tax evasion by failing to report $156,000 in income he garnered in the late 1960s. Prosecutors further accused him of depositing town funds in a personal bank account. His attorney told the court that Evers had indeed concealed the income, but argued that the charge was invalid since this had been done before the late 1960s, as the indictment specified. The case resulted in a mistrial, but Evers' reputation permanently suffered. In the late 1970s he used a $5,300 federal grant to renovate a building he owned which he leased to a federal day care program, and used some of the employees for personal business.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "Evers served many terms as mayor of Fayette. Admired by some, he alienated others with his inflexible stands on various issues. Evers did not like to share or delegate power. Evers lost the Democratic primary for mayor in 1981 to Kennie Middleton. Four years later, Evers defeated Middleton in the primaries and won back the office of mayor. In 1989, Evers lost the nomination once again to political rival Kennie Middleton. In his response to the defeat, Evers accepted, said he was tired, and that: \"Twenty years is enough. I'm tired of being out front. Let someone else be out front.\"",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "Evers began mulling the possibility of a campaign for the office of governor in 1969. He decided to enter the 1971 gubernatorial election as an independent, kicking off his campaign with a rally in Decatur. He later explained his reason for launching the bid, saying, \"I ran for governor because if someone doesn't start running, there will never be a black man or a black woman governor of the state of Mississippi.\" He endorsed white segregationist Jimmy Swan in the Democratic primary, reasoning that if Swan won the nomination, moderate whites would be more inclined to vote for himself in the general election. He campaigned on a platform of reduced taxes—particularly for lower property taxes on the elderly, improved healthcare, and legalizing gambling along the Gulf Coast. Low on money, his candidacy was largely funded by the sale of campaign buttons and copies of his recently published autobiography. His campaign staff was largely young and inexperienced and lacked organization.",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "Evers' rallies drew large crowds of blacks. The Clarion-Ledger, a leading Mississippian conservative newspaper, largely ignored his campaign. To gain attention, he unexpectedly gatecrashed the annual Fisherman's Rodeo in Pascagoula and stopped and spoke to people on the streets of Jackson during their morning commute. Police departments in rural towns were often horrified by the arrival of his campaign caravan. A total of 269 other black candidates were running for office in Mississippi that year, and many of them complained that Evers was self-absorbed and hoarding resources, despite his slim chances of winning. Evers did little to support them.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "In the general election, Evers faced Democratic nominee Bill Waller and independent segregationist Thomas Pickens Brady. Waller and Evers were personally acquainted with one another, as Waller had prosecuted Beckwith for the murder of Medgar. Despite the fears of public observers, the campaign was largely devoid of racism and Evers and Waller avoided negative tactics. Though about 40 percent of the Mississippi electorate in 1971 was black, Evers only secured about 22 percent of the total vote; Waller won with 601,222 votes to Evers' 172,762 and Brady's 6,653. The night of the election, Evers shook the hands of Waller supporters in Jackson and then went to a local television station where his opponent was delivering a victory speech. Learning that Evers had arrived, Waller's nervous aides hurried the governor-elect to his car. Evers approached the car shortly before its departure and told Waller, \"I just wanted to congratulate you.\" Waller replied, \"Whaddya say, Charlie?\" and his wife leaned over and shook Evers' hand.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "In 1978, Evers ran as an independent for the U.S. Senate seat vacated by Democrat James Eastland. He finished in third place behind his opponents, Democrat Maurice Dantin and Republican Thad Cochran. He received 24 percent of the vote, likely siphoning off African-American votes that would have otherwise gone to Dantin. Cochran won the election with a plurality of 45 percent of the vote. With the shift in white voters moving into the Republican Party in the state (and the rest of the South), Cochran was continuously re-elected to his Senate seat. After his failed Senate race, Evers briefly switched political parties and became a Republican.",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "In 1983, Evers ran as an independent for governor of Mississippi but lost to the Democrat Bill Allain. Republican Leon Bramlett of Clarksdale, also known as a college All-American football player, finished second with 39 percent of the vote.",
"title": "Career"
},
{
"paragraph_id": 22,
"text": "Evers endorsed Ronald Reagan for President of the United States during the 1980 United States presidential election. Evers later attracted controversy for his support of judicial nominee Charles W. Pickering, a Republican, who was nominated by President George H. W. Bush for a seat on the U.S. Court of Appeals. Evers criticized the NAACP and other organizations for opposing Pickering, as he said the candidate had a record of supporting the civil rights movement in Mississippi.",
"title": "Career"
},
{
"paragraph_id": 23,
"text": "Evers befriended a range of people from sharecroppers to presidents. He was an informal adviser to politicians as diverse as Lyndon B. Johnson, George C. Wallace, Ronald Reagan and Robert F. Kennedy. Evers severely criticized such national leaders as Roy Wilkins, Stokely Carmichael, H. Rap Brown and Louis Farrakhan over various issues.",
"title": "Career"
},
{
"paragraph_id": 24,
"text": "Evers was a member of the Republican Party for 30 years when he spoke warmly of the 2008 election of Barack Obama as the first black President of the United States. During the 2016 presidential election, Evers supported Donald Trump's presidential campaign.",
"title": "Career"
},
{
"paragraph_id": 25,
"text": "Evers wrote two autobiographies or memoirs: Evers (1971), written with Grace Halsell and self-published; and Have No Fear, written with Andrew Szanton and published by John Wiley & Sons (1997).",
"title": "Career"
},
{
"paragraph_id": 26,
"text": "Evers was briefly married to Christine Evers until their marriage ended in annulment. In 1951, Evers married Nannie L. Magee, with whom he had four daughters. The couple divorced in June 1974. Evers lived in Brandon, Mississippi, and served as station manager of WMPR 90.1 FM in Jackson.",
"title": "Personal life"
},
{
"paragraph_id": 27,
"text": "On July 22, 2020, Evers died in Brandon at age 97.",
"title": "Personal life"
},
{
"paragraph_id": 28,
"text": "Evers was portrayed by Bill Cobbs in the 1996 film Ghosts of Mississippi (1996).",
"title": "Media portrayal"
}
] | James Charles Evers was an American civil rights activist, businessman, radio personality, and politician. Evers was known for his role in the civil rights movement along with his younger brother Medgar Evers. After serving in World War II, Evers began his career as a disc jockey at WHOC in Philadelphia, Mississippi. In 1954, he was made the National Association for the Advancement of Colored People (NAACP) State Voter Registration chairman. After his brother's assassination in 1963, Evers took over his position as field director of the NAACP in Mississippi. In this role, he organized and led many demonstrations for the rights of African Americans. In 1969, Evers was named "Man of the Year" by the NAACP. On June 3, 1969, Evers was elected in Fayette, Mississippi, as the first African-American mayor of a biracial town in Mississippi since the Reconstruction era, following passage of the Voting Rights Act of 1965 which enforced constitutional rights for citizens. At the time of Evers's election as mayor, the town of Fayette had a population of 1,600 of which 75% was African-American and almost 25% white; the white officers on the Fayette city police "resigned rather than work under a black administration," according to the Associated Press. Evers told reporters "I guess we will just have to operate with an all-black police department for the present. But I am still looking for some whites to join us in helping Fayette grow." Evers then outlawed the carrying of firearms within city limits. He ran for governor in 1971 and the United States Senate in 1978, both times as an independent candidate. In 1989, Evers was defeated for re-election after serving sixteen years as mayor. In his later life, he became a Republican, endorsing Ronald Reagan in 1980, and more recently Donald Trump in 2016. This diversity in party affiliations throughout his life was reflected in his fostering of friendships with people from a variety of backgrounds, as well as his advising of politicians from across the political spectrum. After his political career ended, he returned to radio and hosted his own show, Let's Talk. In 2017, Evers was inducted into the National Rhythm & Blues Hall of Fame for his contributions to the music industry. | 2001-11-19T16:10:24Z | 2023-12-29T16:02:00Z | [
"Template:Use mdy dates",
"Template:C-SPAN",
"Template:Civil rights movement",
"Template:Infobox officeholder",
"Template:Reflist",
"Template:Short description",
"Template:Sfn",
"Template:Efn",
"Template:Cite news",
"Template:Citation",
"Template:Authority control",
"Template:Cite book",
"Template:Spnd",
"Template:Portal",
"Template:Notelist",
"Template:Cite web",
"Template:ISBN",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Charles_Evers |
7,143 | Code-division multiple access | Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code).
CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range.
It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T and Verizon) shut down 3G CDMA-based networks in 2022, rendering handsets supporting only those protocols unusable for calls, even to 911.
It can also be used as a channel or medium access technology, like ALOHA for example, or as a permanent pilot/signalling channel that allows users to synchronize their local oscillators to a common system frequency and to estimate the channel parameters continuously.
In these schemes, the message is modulated on a longer spreading sequence consisting of several chips (0s and 1s). Due to their very advantageous auto- and cross-correlation characteristics, such spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with very short sequence lengths of at most 13 chips).
For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but it can be combined with any modulation scheme, such as (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient and equips it with accurate ranging capabilities, which are difficult to obtain without CDMA. Other schemes use subcarriers based on binary offset carrier (BOC) modulation, which is inspired by Manchester codes and enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers.
The technology of code-division multiple access channels has long been known.
In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950. Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab.
In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev. It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station. LK-1 had a weight of 3 kg, a 20–30 km operating distance, and 20–30 hours of battery life. The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of mobile phone, which weighed 0.5 kg. To serve more customers, Kupriyanovich proposed a device that he called a "correlator." In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg (24 lb). It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 Altai service was used in 30 USSR cities.
CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code that has a narrow ambiguity function, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration T_b (symbol period) is XORed with the code signal with pulse duration T_c (chip period). (Note: bandwidth is proportional to 1/T, where T = bit time.) Therefore, the bandwidth of the data signal is 1/T_b and the bandwidth of the spread-spectrum signal is 1/T_c. Since T_c is much smaller than T_b, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T_b/T_c is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.
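To make the spreading step concrete, here is a minimal Python/NumPy sketch (the data bits, chip code and spreading factor are arbitrary assumptions for illustration, not taken from any standard); it XORs a slow bit stream with a faster chip code, so the chip rate, and with it the occupied bandwidth, grows by the spreading factor T_b/T_c:

    import numpy as np

    # Illustrative parameters (assumed): 4 data bits, spreading factor 8
    data = np.array([1, 0, 1, 1])              # data bits, duration T_b each
    code = np.array([1, 0, 0, 1, 1, 1, 0, 1])  # chip code, duration T_c each (T_b = 8 * T_c)
    sf = len(code)                              # spreading factor / processing gain T_b / T_c

    # Spread: repeat every data bit over one code period and XOR it with the chips
    spread = np.repeat(data, sf) ^ np.tile(code, len(data))

    print("data bits :", data)
    print("chips sent:", spread)
    print("chip rate is", sf, "times the bit rate, so the occupied bandwidth grows by the same factor")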
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.
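Continuing in the same illustrative Python/NumPy style (codes drawn at random purely for demonstration), despreading correlates the received chips with a locally generated code; the correlation is large for the intended user's code and close to zero for an unrelated code or for a time-shifted copy:

    import numpy as np

    rng = np.random.default_rng(0)
    sf = 64                                     # assumed spreading factor
    code_a = rng.choice([-1, 1], size=sf)       # desired user's code (chips mapped to +/-1)
    code_b = rng.choice([-1, 1], size=sf)       # some other user's code

    bit = 1                                     # data bit of the desired user, mapped to +1
    rx = bit * code_a                           # received chips (noise and other users omitted)

    print("correlation with the desired code :", int(rx @ code_a))            # = sf (strong peak)
    print("correlation with another code     :", int(rx @ code_b))            # near zero on average
    print("autocorrelation at a 1-chip shift  :", int(rx @ np.roll(code_a, 1)))  # also near zero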
An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate.
In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).
The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry.
Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then a ⋅ b = 0 and:
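The further dot-product properties referred to above are not reproduced in this extract, but the basic orthogonality check itself can be verified numerically with a few illustrative lines of Python:

    import numpy as np

    u = np.array([1, -1])   # one of the two-chip codes used in the example below
    v = np.array([1,  1])   # a second code

    print("u . v    =", u @ v)        # 0  -> u and v are orthogonal
    print("u . u    =", u @ u)        # 2  -> a code correlates strongly with itself
    print("u . (-v) =", u @ (-v))     # still 0: orthogonality is preserved under sign flips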
Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.
Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits.
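Such mutually orthogonal chip codes can be taken, for example, from the rows of a Walsh (Hadamard) matrix built by the Sylvester recursion; the following Python/NumPy sketch (size 8 chosen arbitrarily for illustration) constructs one and verifies that distinct rows have zero dot product:

    import numpy as np

    def walsh(n):
        """Return the n x n Walsh-Hadamard matrix (n must be a power of two)."""
        h = np.array([[1]])
        while h.shape[0] < n:
            h = np.block([[h, h], [h, -h]])   # Sylvester construction
        return h

    w = walsh(8)
    print(w)
    # Rows are mutually orthogonal: W @ W.T equals 8 times the identity matrix
    print(np.array_equal(w @ w.T, 8 * np.eye(8, dtype=int)))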
Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be
For the purposes of this article, we call this constructed vector the transmitted vector.
Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.
If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps:
Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal
This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another:
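The coding and decoding tables referred to in this example are not reproduced in this extract, but the same numbers can be recomputed with a short Python/NumPy sketch (illustrative only, not part of the original article):

    import numpy as np

    code0, data0 = np.array([1, -1]), np.array([1, 0, 1, 1])   # sender0
    code1, data1 = np.array([1,  1]), np.array([0, 0, 1, 1])   # sender1

    def encode(code, data):
        # 1 -> +code, 0 -> -code, concatenated symbol by symbol
        return np.concatenate([code if b == 1 else -code for b in data])

    signal0 = encode(code0, data0)          # ( 1, -1, -1,  1,  1, -1,  1, -1)
    signal1 = encode(code1, data1)          # (-1, -1, -1, -1,  1,  1,  1,  1)
    raw = signal0 + signal1                 # interference pattern "in the air"

    def decode(code, rx):
        # correlate each received symbol (one code period) with the sender's code
        return np.array([chunk @ code for chunk in rx.reshape(-1, len(code))])

    print(decode(code0, raw))               # [ 2 -2  2  2]  -> interpreted as (1, 0, 1, 1)
    print(decode(code1, raw))               # [-2 -2  2  2]  -> interpreted as (0, 0, 1, 1)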
Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example:
Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver:
When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data.
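A short continuation of the previous sketch (again illustrative Python/NumPy) reproduces this "no data" case: despreading the lone signal with sender1's code yields all zeros, while sender0's code recovers the data:

    import numpy as np

    code0, code1 = np.array([1, -1]), np.array([1, 1])
    signal0 = np.array([1, -1, -1, 1, 1, -1, 1, -1])   # sender0 transmitting alone

    def decode(code, rx):
        return np.array([chunk @ code for chunk in rx.reshape(-1, len(code))])

    print(decode(code0, signal0))   # [ 2 -2  2  2] -> sender0 sent (1, 0, 1, 1)
    print(decode(code1, signal0))   # [ 0  0  0  0] -> sender1 sent nothing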
When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users.
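A common way to generate such pseudo-noise spreading sequences is a linear-feedback shift register producing a maximal-length (m-) sequence; Gold codes are then built by combining a preferred pair of m-sequences. The sketch below (Python; the register length and feedback taps, which correspond to a primitive degree-5 polynomial, are assumptions chosen only for this example) generates a 31-chip m-sequence and shows its nearly ideal periodic autocorrelation:

    import numpy as np

    def m_sequence(taps, nbits):
        """Fibonacci LFSR; taps are 1-indexed register stages XORed to form the feedback."""
        state = [1] * nbits                     # any non-zero seed works
        length = 2 ** nbits - 1                 # period of a maximal-length sequence
        out = []
        for _ in range(length):
            out.append(state[-1])               # output the last stage
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]           # shift, insert feedback bit
        return np.array(out)

    seq = m_sequence(taps=[5, 2], nbits=5)      # 31-chip m-sequence (taps assumed for this demo)
    chips = 1 - 2 * seq                         # map 0/1 -> +1/-1

    # periodic autocorrelation of an m-sequence: 31 at zero shift, -1 at every other shift
    acf = [int(chips @ np.roll(chips, k)) for k in range(len(chips))]
    print(acf[0], set(acf[1:]))                 # prints: 31 {-1}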
All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor.
Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power.
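Why power control matters can be illustrated with a crude chip-synchronous simulation (Python/NumPy; the spreading factor, amplitudes and random codes are all assumptions of this sketch rather than a model of a real system): when the interferer is received much more strongly than the desired user, despreading alone no longer protects the weak signal:

    import numpy as np

    rng = np.random.default_rng(1)
    sf, nbits = 64, 2000                          # assumed spreading factor and number of bits

    def bit_error_rate(interferer_amplitude):
        errors = 0
        for _ in range(nbits):
            code_a = rng.choice([-1, 1], size=sf)     # desired user's spreading code
            code_b = rng.choice([-1, 1], size=sf)     # interfering user's code (not orthogonal)
            bit_a = rng.choice([-1, 1])
            bit_b = rng.choice([-1, 1])
            rx = 1.0 * bit_a * code_a + interferer_amplitude * bit_b * code_b
            decision = np.sign(rx @ code_a)           # despread with the desired code
            errors += decision != bit_a
        return errors / nbits

    print("equal received powers  :", bit_error_rate(1.0))    # essentially error-free
    print("interferer 20x stronger:", bit_error_rate(20.0))   # near-far problem: many errors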
In 2019, schemes to precisely estimate the required code length as a function of the Doppler and delay characteristics were developed. Soon after, machine-learning-based techniques that generate sequences of a desired length and with desired spreading properties were published as well. These are highly competitive with the classic Gold and Welch sequences, but they are not generated by linear-feedback shift registers and therefore have to be stored in lookup tables.
In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA.
TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency.
Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum.
Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time.
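The averaging argument can be checked with a tiny Monte-Carlo sketch (Python/NumPy, purely illustrative): 2N users that are each active half of the time produce on average the same number of simultaneous interferers as N always-active users, but the instantaneous number, and hence the bit error probability, fluctuates around that mean:

    import numpy as np

    rng = np.random.default_rng(2)
    N, trials = 32, 100_000

    always_on = np.full(trials, N)                        # N users talking all of the time
    bursty = rng.binomial(n=2 * N, p=0.5, size=trials)    # 2N users, each talking half the time

    print("mean active users:", always_on.mean(), bursty.mean())   # both approximately N
    print("std  active users:", always_on.std(),  bursty.std())    # 0 vs. sqrt(2N)/2 = 4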
In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system.
Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.
CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome.
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.
Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.
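A rake receiver can be sketched in a few lines (Python/NumPy; the path delays, gains, noise level and spreading code are assumptions for illustration): each "finger" correlates the received chip stream at one known path delay, and the finger outputs are combined before the bit decision:

    import numpy as np

    rng = np.random.default_rng(3)
    sf, nbits = 32, 8
    code = rng.choice([-1, 1], size=sf)                 # assumed spreading code
    bits = rng.choice([-1, 1], size=nbits)              # data bits mapped to +/-1
    tx = np.repeat(bits, sf) * np.tile(code, nbits)     # transmitted chip stream

    # Two-path channel: a direct path and an echo delayed by 3 chips with gain 0.6
    paths = [(0, 1.0), (3, 0.6)]
    rx = np.zeros(len(tx) + 8)
    for delay, gain in paths:
        rx[delay:delay + len(tx)] += gain * tx
    rx += 0.5 * rng.standard_normal(len(rx))            # additive noise

    def finger(delay):
        # correlate each symbol period with the code, starting at the path delay
        return np.array([rx[delay + k * sf: delay + (k + 1) * sf] @ code for k in range(nbits)])

    combined = sum(gain * finger(delay) for delay, gain in paths)   # rake combining
    print("decided bits match:", np.array_equal(np.sign(combined), bits))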
Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.
Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.
A novel collaborative multi-user transmission and detection scheme called collaborative CDMA has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. An enhanced CDMA version known as interleave-division multiple access (IDMA) uses the orthogonal interleaving as the only means of user separation in place of signature sequence used in CDMA system. | [
{
"paragraph_id": 0,
"text": "Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code).",
"title": ""
},
{
"paragraph_id": 1,
"text": "CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range.",
"title": ""
},
{
"paragraph_id": 2,
"text": "It is used as the access method in many mobile phone standards. IS-95, also called \"cdmaOne\", and its 3G evolution CDMA2000, are often simply referred to as \"CDMA\", but UMTS, the 3G standard used by GSM carriers, also uses \"wideband CDMA\", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T and Verizon) shut down 3G CDMA-based networks in 2022, rendering handsets supporting only those protocols unusable for calls, even to 911.",
"title": ""
},
{
"paragraph_id": 3,
"text": "It can be also used as a channel or medium access technology, like ALOHA for example or as a permanent pilot/signalling channel to allow users to synchronize their local oscillators to a common system frequency, thereby also estimating the channel parameters permanently.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In these schemes, the message is modulated on a longer spreading sequence, consisting of several chips (0es and 1es). Due to their very advantageous auto- and crosscorrelation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length of typically 8 to 32).",
"title": ""
},
{
"paragraph_id": 5,
"text": "For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme like (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equipping them with accurate ranging capabilities, which is difficult without CDMA). Other schemes use subcarriers based on binary offset carrier modulation (BOC modulation), which is inspired by Manchester codes and enable a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The technology of code-division multiple access channels has long been known.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on \"The Security of Overseas Transport\", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950. Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev. It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station. LK-1 has a weight of 3 kg, 20–30 km operating distance, and 20–30 hours of battery life. The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made the new experimental \"pocket\" model of mobile phone. This phone weighed 0.5 kg. To serve more customers, Kupriyanovich proposed the device, which he called \"correlator.\" In 1958, the USSR also started the development of the \"Altai\" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg (24 lb). It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 Altai service was used in 30 USSR cities.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code that has a narrow ambiguity function, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration of T b {\\displaystyle T_{b}} (symbol period) is XORed with the code signal with pulse duration of T c {\\displaystyle T_{c}} (chip period). (Note: bandwidth is proportional to 1 / T {\\displaystyle 1/T} , where T {\\displaystyle T} = bit time.) Therefore, the bandwidth of the data signal is 1 / T b {\\displaystyle 1/T_{b}} and the bandwidth of the spread spectrum signal is 1 / T c {\\displaystyle 1/T_{c}} . Since T c {\\displaystyle T_{c}} is much smaller than T b {\\displaystyle T_{b}} , the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T b / T c {\\displaystyle T_{b}/T_{c}} is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.",
"title": "Steps in CDMA modulation"
},
{
"paragraph_id": 10,
"text": "Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.",
"title": "Steps in CDMA modulation"
},
{
"paragraph_id": 11,
"text": "An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate.",
"title": "Steps in CDMA modulation"
},
{
"paragraph_id": 12,
"text": "In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).",
"title": "Steps in CDMA modulation"
},
{
"paragraph_id": 13,
"text": "The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 14,
"text": "Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then a ⋅ b = 0 {\\displaystyle \\mathbf {a} \\cdot \\mathbf {b} =0} and:",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 15,
"text": "Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all other, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 16,
"text": "Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 17,
"text": "Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 18,
"text": "For the purposes of this article, we call this constructed vector the transmitted vector.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 19,
"text": "Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 20,
"text": "Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 21,
"text": "If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps:",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 22,
"text": "Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 23,
"text": "This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another:",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 24,
"text": "Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example:",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 25,
"text": "Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver:",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 26,
"text": "When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data.",
"title": "Code-division multiplexing (synchronous CDMA)"
},
{
"paragraph_id": 27,
"text": "When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique \"pseudo-random\" or \"pseudo-noise\" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to number of users.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 28,
"text": "All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 29,
"text": "Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 30,
"text": "In 2019, schemes to precisely estimate the required length of the codes in dependence of Doppler and delay characteristics have been developed. Soon after, machine learning based techniques that generate sequences of a desired length and spreading properties have been published as well. These are highly competitive with the classic Gold and Welch sequences. These are not generated by linear-feedback-shift-registers, but have to be stored in lookup tables.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 31,
"text": "In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 32,
"text": "TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 33,
"text": "Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 34,
"text": "Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time, then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 35,
"text": "In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are N time slots in a TDMA system and 2N users that talk half of the time, then half of the time there will be more than N users needing to use more than N time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 36,
"text": "Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.",
"title": "Asynchronous CDMA"
},
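As a concrete illustration of spreading with a pseudo-random code, the sketch below (illustrative only; it uses a random ±1 chip sequence rather than the codes of any particular standard) spreads a few data bits, recovers them with the correct sequence, and shows that a receiver using the wrong sequence sees only small, noise-like correlations:

```python
# Minimal direct-sequence spreading sketch with a pseudo-random +/-1 chip sequence.
import numpy as np

rng = np.random.default_rng(1)
spreading_factor = 64
code = rng.choice([-1, 1], size=spreading_factor)        # pseudo-random chip sequence

data_bits = np.array([1, -1, 1, 1, -1])                  # bipolar data
tx = np.concatenate([bit * code for bit in data_bits])   # spread (transmitted) signal

# Receiver with the correct code: correlate each symbol period with the code.
rx_symbols = tx.reshape(-1, spreading_factor)
decided = np.sign(rx_symbols @ code)                     # despread + hard decision
print("recovered bits:", decided)                        # matches data_bits

# Receiver with the wrong code gets small, noise-like correlation values.
wrong_code = rng.choice([-1, 1], size=spreading_factor)
print("wrong-code correlations:", rx_symbols @ wrong_code)
```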
{
"paragraph_id": 37,
"text": "CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome.",
"title": "Asynchronous CDMA"
},
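The narrow-band rejection described above can also be seen directly in the despreading operation: multiplying by the code spreads the interferer's energy while collapsing the wanted bits back to full amplitude. A rough demonstration, assuming ideal chip timing, a single user, and no thermal noise:

```python
# Narrow-band interference is suppressed by despreading: the tone is multiplied
# by the pseudo-random code at the receiver, which spreads its energy, while the
# wanted bits add up coherently over the spreading factor.
import numpy as np

rng = np.random.default_rng(2)
sf = 128                                   # spreading factor (chips per bit)
code = rng.choice([-1.0, 1.0], size=sf)
bits = rng.choice([-1.0, 1.0], size=20)

tx = np.concatenate([b * code for b in bits])
n = np.arange(tx.size)
tone = 2.0 * np.cos(2 * np.pi * 0.11 * n)  # narrow-band interferer, twice the chip amplitude
rx = tx + tone

decisions = np.sign(rx.reshape(-1, sf) @ code)
print("bit errors:", int(np.sum(decisions != bits)))   # expected: 0
```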
{
"paragraph_id": 38,
"text": "Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.",
"title": "Asynchronous CDMA"
},
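A quick numerical check of the correlation property this relies on, using a random ±1 sequence as a stand-in for a true PN code:

```python
# A code delayed by one or more chips correlates poorly with the original code.
import numpy as np

rng = np.random.default_rng(3)
code = rng.choice([-1, 1], size=1023)

def circular_correlation(x, lag):
    """Correlate the sequence with a circularly shifted copy of itself."""
    return int(np.dot(x, np.roll(x, lag)))

print("lag 0 :", circular_correlation(code, 0))    # = 1023 (perfect alignment)
print("lag 1 :", circular_correlation(code, 1))    # small value, near zero
print("lag 5 :", circular_correlation(code, 5))    # small value, near zero
```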
{
"paragraph_id": 39,
"text": "Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.",
"title": "Asynchronous CDMA"
},
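A deliberately simplified rake sketch follows; it assumes two chip-spaced paths with known delays and gains and no noise, and it only illustrates the combining step, not a full receiver:

```python
# Rake combining sketch: each "finger" correlates at one known path delay, and the
# finger outputs are weighted by the path gains and summed, giving a larger decision
# variable than the strongest single finger alone.
import numpy as np

rng = np.random.default_rng(4)
sf = 64
code = rng.choice([-1.0, 1.0], size=sf)
bits = rng.choice([-1.0, 1.0], size=10)
tx = np.concatenate([b * code for b in bits])

delays, gains = [0, 3], [1.0, 0.6]               # two multipath components (assumed known)
rx = np.zeros(tx.size + max(delays))
for d, g in zip(delays, gains):
    rx[d:d + tx.size] += g * tx                  # superimpose the delayed, scaled copies

def finger(signal, delay):
    """Correlate each symbol period with the code at the given path delay."""
    segment = signal[delay:delay + bits.size * sf].reshape(-1, sf)
    return segment @ code

combined = sum(g * finger(rx, d) for d, g in zip(delays, gains))  # gain-weighted combining
print("single-finger magnitude:", np.abs(finger(rx, 0)).mean())
print("rake-combined magnitude:", np.abs(combined).mean())
print("bit errors:", int(np.sum(np.sign(combined) != bits)))
```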
{
"paragraph_id": 40,
"text": "Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.",
"title": "Asynchronous CDMA"
},
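The code-planning point can be illustrated with a short check (again with random ±1 sequences standing in for planned PN offsets or scrambling codes): two different spreading codes have low cross-correlation, so a neighbouring cell's signal looks like low-level noise after despreading even though it occupies the same frequency:

```python
# Low cross-correlation between different cells' codes is what allows every cell
# to reuse the same carrier frequency.
import numpy as np

rng = np.random.default_rng(5)
length = 2048
cell_a = rng.choice([-1, 1], size=length)
cell_b = rng.choice([-1, 1], size=length)

print("auto-correlation  (A with A):", int(np.dot(cell_a, cell_a)))   # = 2048
print("cross-correlation (A with B):", int(np.dot(cell_a, cell_b)))   # near zero
```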
{
"paragraph_id": 41,
"text": "Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.",
"title": "Asynchronous CDMA"
},
{
"paragraph_id": 42,
"text": "A novel collaborative multi-user transmission and detection scheme called collaborative CDMA has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. An enhanced CDMA version known as interleave-division multiple access (IDMA) uses the orthogonal interleaving as the only means of user separation in place of signature sequence used in CDMA system.",
"title": "Collaborative CDMA"
}
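The following is a deliberately simplified sketch of the underlying idea, not the receiver from the cited work: two users share one spreading code and are separated purely by their different, known channel gains, with each bit pair chosen by minimum Euclidean distance from the despread sample. The gains, noise level, and spreading factor are arbitrary illustrative choices:

```python
# Simplified "co-spread users" illustration: joint detection over the 4 possible
# bit pairs using the users' distinct channel gains.
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
sf = 64
code = rng.choice([-1.0, 1.0], size=sf)
h = np.array([1.0, 0.45])                      # distinct flat-fading gains, assumed known

bits = rng.choice([-1.0, 1.0], size=(2, 200))  # two users sharing the same code
tx = (h[:, None] * bits).sum(axis=0)           # superimposed symbols on the shared code
rx = np.repeat(tx, sf) * np.tile(code, bits.shape[1])
rx += 0.1 * rng.standard_normal(rx.size)       # mild receiver noise

despread = rx.reshape(-1, sf) @ code / sf      # one soft sample per symbol period

candidates = np.array(list(product([-1.0, 1.0], repeat=2)))  # the 4 possible bit pairs
hypotheses = candidates @ h                                   # their noiseless values
detected = candidates[np.argmin(np.abs(despread[:, None] - hypotheses), axis=1)].T

print("symbol errors per user:", (detected != bits).sum(axis=1))
```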
] | Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies. To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme. CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range. It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers shut down 3G CDMA-based networks in 2022, rendering handsets supporting only those protocols unusable for calls, even to 911. It can be also used as a channel or medium access technology, like ALOHA for example or as a permanent pilot/signalling channel to allow users to synchronize their local oscillators to a common system frequency, thereby also estimating the channel parameters permanently. In these schemes, the message is modulated on a longer spreading sequence, consisting of several chips. Due to their very advantageous auto- and crosscorrelation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes. For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme like quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient. Other schemes use subcarriers based on binary offset carrier modulation, which is inspired by Manchester codes and enable a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers. | 2002-02-25T15:51:15Z | 2023-11-04T02:11:43Z | [
"Template:Authority control",
"Template:Clarify",
"Template:Div col",
"Template:Cdma",
"Template:Cite journal",
"Template:See also",
"Template:Cite book",
"Template:Cite web",
"Template:Reflist",
"Template:Commons category",
"Template:Channel access methods",
"Template:Multiplex techniques",
"Template:Refn",
"Template:Div col end",
"Template:Cite patent",
"Template:Cite conference",
"Template:Cite news",
"Template:Short description",
"Template:About",
"Template:Convert"
] | https://en.wikipedia.org/wiki/Code-division_multiple_access |
7,144 | Internet filter | An Internet filter is software that restricts or controls the content an Internet user is able to access, especially when utilized to restrict material delivered over the Internet via the Web, email, or other means. Content-control software determines what content will be available or blocked.
Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (see Internet censorship), or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by individual users to their own computers.
The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some content-control software includes time control functions that empower parents to set the amount of time a child may spend accessing the Internet, playing games, or engaging in other computer activities.
In some countries, such software is ubiquitous. In Cuba, if a computer user at a government-controlled Internet cafe types certain words, the word processor or web browser is automatically closed, and a "state security" warning is given.
The term "content control" is used on occasion by CNN, Playboy magazine, the San Francisco Chronicle, and The New York Times. However, several other terms, including "content filtering software", "filtering proxy servers", "secure web gateways", "censorware", "content security and control", "web filtering software", "content-censoring software", and "content-blocking software", are often used. "Nannyware" has also been used in both product marketing and by the media. Industry research company Gartner uses "secure web gateway" (SWG) to describe the market segment.
Companies that make products that selectively block Web sites do not refer to these products as censorware, and prefer terms such as "Internet filter" or "URL Filter"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, "parental control software" is also used. Some products log all sites that a user accesses and rate them based on content type for reporting to an "accountability partner" of the person's choosing, and the term accountability software is used. Internet filters, parental control software, and/or accountability software may also be combined into one product.
Those critical of such software, however, use the term "censorware" freely: consider the Censorware Project, for example. The use of the term "censorware" in editorials criticizing makers of such software is widespread and covers many different varieties and applications: Xeni Jardin used the term in a 9 March 2006 editorial in The New York Times, when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district.
In general, outside of editorial pages as described above, traditional newspapers do not use the term "censorware" in their reporting, preferring instead to use less overtly controversial terms such as "content filter", "content control", or "web filtering"; The New York Times and The Wall Street Journal both appear to follow this practice. On the other hand, Web-based newspapers such as CNET use the term in both editorial and journalistic contexts, for example "Windows Live to Get Censorware."
Filters can be implemented in many different ways: by software on a personal computer, via network infrastructure such as proxy servers, DNS servers, or firewalls that provide Internet access. No solution provides complete coverage, so most companies deploy a mix of technologies to achieve the proper content control in line with their policies.
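As a small illustration of one of these approaches, a DNS-level filter can check each requested hostname (and its parent domains) against a locally configured blocklist before resolving it. The sketch below is illustrative only; the domain names and categories are invented:

```python
# Minimal DNS-style blocklist check: walk up the requested name's parent domains
# and refuse to resolve anything that matches a configured category.
BLOCKLIST = {
    "ads.example": "advertising",
    "casino.example": "gambling",
}

def lookup_policy(hostname: str):
    """Return the blocked category for a hostname, or None if it is allowed."""
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in BLOCKLIST:
            return BLOCKLIST[candidate]
    return None

for name in ("www.ads.example", "news.example"):
    category = lookup_policy(name)
    print(name, "->", f"blocked ({category})" if category else "resolve normally")
```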
The Internet does not intrinsically provide content blocking, and therefore there is much content on the Internet that is considered unsuitable for children, given that much content is given certifications as suitable for adults only, e.g. 18-rated games and movies.
Internet service providers (ISPs) that block material containing pornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material including adware, spam, computer viruses, worms, trojan horses, and spyware.
Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings.
A filter that is overly zealous, or that mislabels content not intended to be censored, can result in over-blocking, or over-censoring. Over-blocking can filter out material that should be acceptable under the filtering policy in effect; for example, health-related information may unintentionally be filtered along with pornography-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over-blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University. Another example was the filtering of the Horniman Museum. Over-blocking may also encourage users to bypass the filter entirely.
Whenever new information is uploaded to the Internet, filters can under block, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place.
Many object to governments filtering viewpoints on moral or political issues, arguing that such filtering could become a vehicle for propaganda. Many would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced internet censorship. (See section below)
In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment.
In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 the Supreme Court ruled in their favor. Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation, was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary.
In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the "Cyber Patrol" license agreement. They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets.
Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid.
The Motion Picture Association successfully obtained a UK ruling requiring ISPs to use content-control software to prevent copyright infringement by their subscribers.
Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites (including the Web site of the Vatican), many political sites, and homosexuality-related sites. X-Stop was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle. CYBERsitter blocks out sites like National Organization for Women. Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use. Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company, has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel and gay-rights web sites, such as glaad.org.
Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA) — now part of the Family Online Safety Institute — developed a content rating system for online content providers. Using an online questionnaire a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer readable digest of this description that can then be used by content filtering software to block or allow that site.
ICRA labels come in a variety of formats. These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor.
ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content. The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use. Like ICRA the RTA label is free. Both labels are recognized by a wide variety of content-control software.
The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, mature and adult, making the specification extremely simple.
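A filter that honours this kind of self-labelling only needs to read the declared rating from the page's metadata and map it to a policy decision. The sketch below is illustrative; the tag name and accepted values are placeholders rather than the exact strings used by any particular labelling scheme:

```python
# Read a self-declared content rating from HTML <meta> tags and decide whether
# the page should be blocked under a simple policy.
from html.parser import HTMLParser

class RatingMetaParser(HTMLParser):
    """Collects ratings declared via <meta name="rating" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.ratings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "rating":
            self.ratings.append(attrs.get("content", "").lower())

def is_blocked(html: str, blocked_levels=("mature", "adult")) -> bool:
    parser = RatingMetaParser()
    parser.feed(html)
    return any(level in blocked_levels for level in parser.ratings)

page = '<html><head><meta name="rating" content="adult"></head><body>...</body></html>'
print(is_blocked(page))   # True
```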
The Australian Internet Safety Advisory Body has information about "practical advice on Internet safety, parental control and filters for the protection of children, students and families" that also includes public libraries.
NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering.
The Australian Government has introduced legislation that requires ISPs to "restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia". The requirement, known as Cleanfeed, was due to commence on 20 January 2008.
Cleanfeed is a proposed mandatory ISP level content filtration system. It was proposed by the Beazley led Australian Labor Party opposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by the Rudd ALP government, and initial tests in Tasmania have produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by the EFA and gaining irregular mainstream media attention, with a majority of Australians reportedly "strongly against" its implementation. Criticisms include its expense, inaccuracy (it will be impossible to ensure only illegal sites are blocked) and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights. Another major criticism point has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while using the Internet, achieving the exact opposite effect. Cleanfeed is a responsibility of Senator Conroy's portfolio.
In Denmark, the stated policy is to "prevent inappropriate Internet sites from being accessed from children's libraries across Denmark". "'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press-release of the Danish Ministry of Culture."
Many libraries in the UK, such as the British Library and local authority public libraries, apply filters to Internet access. According to research conducted by the Radical Librarians Collective, at least 98% of public libraries apply filters, including categories such as "LGBT interest", "abortion" and "questionable". Some public libraries block payday loan websites.
The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request.
Many legal scholars believe that a number of legal cases, in particular Reno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment. However, in the June 2003 case United States v. American Library Association, the Supreme Court found the Children's Internet Protection Act (CIPA) constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled without having to explain the reasons for their request. The plurality decision left open a future "as-applied" Constitutional challenge, however.
In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter. In May 2010, the Washington State Supreme Court provided an opinion after it was asked to certify a question referred by the United States District Court for the Eastern District of Washington: "Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron." The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: "It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought." The case returned to federal court.
In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software. Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user.
Content filtering in general can "be bypassed entirely by tech-savvy individuals." Blocking content on a device "[will not]…guarantee that users won't eventually be able to find a way around the filter."
Some software may be bypassed successfully by using alternative protocols such as FTP or telnet or HTTPS, conducting searches in a different language, using a proxy server or a circumventor such as Psiphon. Also cached web pages returned by Google or other searches could bypass some controls as well. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager, or in Mac OS X using Force Quit or Activity Monitor. Numerous workarounds and counters to workarounds from content-control software creators exist. Google services are often blocked by filters, but these may most often be bypassed by using https:// in place of http:// since content filtering software is not able to interpret content under secure connections (in this case SSL).
An encrypted VPN can be used as means of bypassing content control software, especially if the content control software is installed on an Internet gateway or firewall.
Other ways to bypass a content control filter include translation sites and establishing a remote connection with an uncensored device.
Some ISPs offer parental control options. Some offer security software which includes parental controls. Mac OS X v10.4 offers parental controls for several applications (Mail, Finder, iChat, Safari & Dictionary). Microsoft's Windows Vista operating system also includes content-control software.
Content filtering technology exists in two major forms: application gateway or packet inspection. For HTTP access the application gateway is called a web-proxy or just a proxy. Such web-proxies can inspect both the initial request and the returned web page using arbitrarily complex rules and will not return any part of the page to the requester until a decision is made. In addition they can make substitutions in whole or for any part of the returned result. Packet inspection filters do not initially interfere with the connection to the server but inspect the data in the connection as it goes past, at some point the filter may decide that the connection is to be filtered and it will then disconnect it by injecting a TCP-Reset or similar faked packet. The two techniques can be used together with the packet filter monitoring a link until it sees an HTTP connection starting to an IP address that has content that needs filtering. The packet filter then redirects the connection to the web-proxy which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system.
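The web-proxy decision step can be sketched as follows; the categories, patterns, and policy are invented for illustration, and a real gateway would use far richer classification of both the request and the returned page:

```python
# Toy version of a filtering web-proxy's decision: classify the requested URL
# immediately, and the response body once it arrives, then allow or block.
import re

URL_RULES = {"gambling": re.compile(r"poker|casino", re.I),
             "malware":  re.compile(r"\.exe$|payload", re.I)}
BODY_RULES = {"adult": re.compile(r"\bexplicit-content-marker\b", re.I)}
POLICY_BLOCKS = {"gambling", "adult", "malware"}

def classify(text, rules):
    """Return the set of categories whose pattern matches the text."""
    return {category for category, pattern in rules.items() if pattern.search(text)}

def proxy_decision(url: str, body: str = "") -> str:
    """Return 'block' or 'allow'; the body check runs once the response is available."""
    categories = classify(url, URL_RULES) | classify(body, BODY_RULES)
    return "block" if categories & POLICY_BLOCKS else "allow"

print(proxy_decision("http://example.test/casino"))                   # block on the request
print(proxy_decision("http://example.test/news", body="weather..."))  # allow after inspection
```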
Gateway-based content control software may be more difficult to bypass than desktop software as the user does not have physical access to the filtering device. However, many of the techniques in the Bypassing filters section still work. | [
{
"paragraph_id": 0,
"text": "An Internet filter is software that restricts or controls the content an Internet user is capable to access, especially when utilized to restrict material delivered over the Internet via the Web, Email, or other means. Content-control software determines what content will be available or be blocked.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Such restrictions can be applied at various levels: a government can attempt to apply them nationwide (see Internet censorship), or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by an individual users to their own computers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some content-control software includes time control functions that empowers parents to set the amount of time that child may spend accessing the Internet or playing games or other computer activities.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In some countries, such software is ubiquitous. In Cuba, if a computer user at a government-controlled Internet cafe types certain words, the word processor or web browser is automatically closed, and a \"state security\" warning is given.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"content control\" is used on occasion by CNN, Playboy magazine, the San Francisco Chronicle, and The New York Times. However, several other terms, including \"content filtering software\", \"filtering proxy servers\", \"secure web gateways\", \"censorware\", \"content security and control\", \"web filtering software\", \"content-censoring software\", and \"content-blocking software\", are often used. \"Nannyware\" has also been used in both product marketing and by the media. Industry research company Gartner uses \"secure web gateway\" (SWG) to describe the market segment.",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "Companies that make products that selectively block Web sites do not refer to these products as censorware, and prefer terms such as \"Internet filter\" or \"URL Filter\"; in the specialized case of software specifically designed to allow parents to monitor and restrict the access of their children, \"parental control software\" is also used. Some products log all sites that a user accesses and rates them based on content type for reporting to an \"accountability partner\" of the person's choosing, and the term accountability software is used. Internet filters, parental control software, and/or accountability software may also be combined into one product.",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "Those critical of such software, however, use the term \"censorware\" freely: consider the Censorware Project, for example. The use of the term \"censorware\" in editorials criticizing makers of such software is widespread and covers many different varieties and applications: Xeni Jardin used the term in a 9 March 2006 editorial in The New York Times, when discussing the use of American-made filtering software to suppress content in China; in the same month a high school student used the term to discuss the deployment of such software in his school district.",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "In general, outside of editorial pages as described above, traditional newspapers do not use the term \"censorware\" in their reporting, preferring instead to use less overtly controversial terms such as \"content filter\", \"content control\", or \"web filtering\"; The New York Times and The Wall Street Journal both appear to follow this practice. On the other hand, Web-based newspapers such as CNET use the term in both editorial and journalistic contexts, for example \"Windows Live to Get Censorware.\"",
"title": "Terminology"
},
{
"paragraph_id": 8,
"text": "Filters can be implemented in many different ways: by software on a personal computer, via network infrastructure such as proxy servers, DNS servers, or firewalls that provide Internet access. No solution provides complete coverage, so most companies deploy a mix of technologies to achieve the proper content control in line with their policies.",
"title": "Types of filtering"
},
{
"paragraph_id": 9,
"text": "The Internet does not intrinsically provide content blocking, and therefore there is much content on the Internet that is considered unsuitable for children, given that much content is given certifications as suitable for adults only, e.g. 18-rated games and movies.",
"title": "Reasons for filtering"
},
{
"paragraph_id": 10,
"text": "Internet service providers (ISPs) that block material containing pornography, or controversial religious, political, or news-related content en route are often utilized by parents who do not permit their children to access content not conforming to their personal beliefs. Content filtering software can, however, also be used to block malware and other content that is or contains hostile, intrusive, or annoying material including adware, spam, computer viruses, worms, trojan horses, and spyware.",
"title": "Reasons for filtering"
},
{
"paragraph_id": 11,
"text": "Most content control software is marketed to organizations or parents. It is, however, also marketed on occasion to facilitate self-censorship, for example by people struggling with addictions to online pornography, gambling, chat rooms, etc. Self-censorship software may also be utilised by some in order to avoid viewing content they consider immoral, inappropriate, or simply distracting. A number of accountability software products are marketed as self-censorship or accountability software. These are often promoted by religious media and at religious gatherings.",
"title": "Reasons for filtering"
},
{
"paragraph_id": 12,
"text": "Utilizing a filter that is overly zealous at filtering content, or mislabels content not intended to be censored can result in over blocking, or over-censoring. Over blocking can filter out material that should be acceptable under the filtering policy in effect, for example health related information may unintentionally be filtered along with porn-related material because of the Scunthorpe problem. Filter administrators may prefer to err on the side of caution by accepting over blocking to prevent any risk of access to sites that they determine to be undesirable. Content-control software was mentioned as blocking access to Beaver College before its name change to Arcadia University. Another example was the filtering of Horniman Museum. As well, over-blocking may encourage users to bypass the filter entirely.",
"title": "Criticism"
},
{
"paragraph_id": 13,
"text": "Whenever new information is uploaded to the Internet, filters can under block, or under-censor, content if the parties responsible for maintaining the filters do not update them quickly and accurately, and a blacklisting rather than a whitelisting filtering policy is in place.",
"title": "Criticism"
},
{
"paragraph_id": 14,
"text": "Many would not be satisfied with government filtering viewpoints on moral or political issues, agreeing that this could become support for propaganda. Many would also find it unacceptable that an ISP, whether by law or by the ISP's own choice, should deploy such software without allowing the users to disable the filtering for their own connections. In the United States, the First Amendment to the United States Constitution has been cited in calls to criminalise forced internet censorship. (See section below)",
"title": "Criticism"
},
{
"paragraph_id": 15,
"text": "In 1998, a United States federal district court in Virginia ruled (Loudoun v. Board of Trustees of the Loudoun County Library) that the imposition of mandatory filtering in a public library violates the First Amendment.",
"title": "Criticism"
},
{
"paragraph_id": 16,
"text": "In 1996 the US Congress passed the Communications Decency Act, banning indecency on the Internet. Civil liberties groups challenged the law under the First Amendment, and in 1997 the Supreme Court ruled in their favor. Part of the civil liberties argument, especially from groups like the Electronic Frontier Foundation, was that parents who wanted to block sites could use their own content-filtering software, making government involvement unnecessary.",
"title": "Criticism"
},
{
"paragraph_id": 17,
"text": "In the late 1990s, groups such as the Censorware Project began reverse-engineering the content-control software and decrypting the blacklists to determine what kind of sites the software blocked. This led to legal action alleging violation of the \"Cyber Patrol\" license agreement. They discovered that such tools routinely blocked unobjectionable sites while also failing to block intended targets.",
"title": "Criticism"
},
{
"paragraph_id": 18,
"text": "Some content-control software companies responded by claiming that their filtering criteria were backed by intensive manual checking. The companies' opponents argued, on the other hand, that performing the necessary checking would require resources greater than the companies possessed and that therefore their claims were not valid.",
"title": "Criticism"
},
{
"paragraph_id": 19,
"text": "The Motion Picture Association successfully obtained a UK ruling enforcing ISPs to use content-control software to prevent copyright infringement by their subscribers.",
"title": "Criticism"
},
{
"paragraph_id": 20,
"text": "Many types of content-control software have been shown to block sites based on the religious and political leanings of the company owners. Examples include blocking several religious sites (including the Web site of the Vatican), many political sites, and homosexuality-related sites. X-Stop was shown to block sites such as the Quaker web site, the National Journal of Sexual Orientation Law, The Heritage Foundation, and parts of The Ethical Spectacle. CYBERsitter blocks out sites like National Organization for Women. Nancy Willard, an academic researcher and attorney, pointed out that many U.S. public schools and libraries use the same filtering software that many Christian organizations use. Cyber Patrol, a product developed by The Anti-Defamation League and Mattel's The Learning Company, has been found to block not only political sites it deems to be engaging in 'hate speech' but also human rights web sites, such as Amnesty International's web page about Israel and gay-rights web sites, such as glaad.org.",
"title": "Criticism"
},
{
"paragraph_id": 21,
"text": "Content labeling may be considered another form of content-control software. In 1994, the Internet Content Rating Association (ICRA) — now part of the Family Online Safety Institute — developed a content rating system for online content providers. Using an online questionnaire a webmaster describes the nature of their web content. A small file is generated that contains a condensed, computer readable digest of this description that can then be used by content filtering software to block or allow that site.",
"title": "Content labeling"
},
{
"paragraph_id": 22,
"text": "ICRA labels come in a variety of formats. These include the World Wide Web Consortium's Resource Description Framework (RDF) as well as Platform for Internet Content Selection (PICS) labels used by Microsoft's Internet Explorer Content Advisor.",
"title": "Content labeling"
},
{
"paragraph_id": 23,
"text": "ICRA labels are an example of self-labeling. Similarly, in 2006 the Association of Sites Advocating Child Protection (ASACP) initiated the Restricted to Adults self-labeling initiative. ASACP members were concerned that various forms of legislation being proposed in the United States were going to have the effect of forcing adult companies to label their content. The RTA label, unlike ICRA labels, does not require a webmaster to fill out a questionnaire or sign up to use. Like ICRA the RTA label is free. Both labels are recognized by a wide variety of content-control software.",
"title": "Content labeling"
},
{
"paragraph_id": 24,
"text": "The Voluntary Content Rating (VCR) system was devised by Solid Oak Software for their CYBERsitter filtering software, as an alternative to the PICS system, which some critics deemed too complex. It employs HTML metadata tags embedded within web page documents to specify the type of content contained in the document. Only two levels are specified, mature and adult, making the specification extremely simple.",
"title": "Content labeling"
},
{
"paragraph_id": 25,
"text": "The Australian Internet Safety Advisory Body has information about \"practical advice on Internet safety, parental control and filters for the protection of children, students and families\" that also includes public libraries.",
"title": "Use in public libraries"
},
{
"paragraph_id": 26,
"text": "NetAlert, the software made available free of charge by the Australian government, was allegedly cracked by a 16-year-old student, Tom Wood, less than a week after its release in August 2007. Wood supposedly bypassed the $84 million filter in about half an hour to highlight problems with the government's approach to Internet content filtering.",
"title": "Use in public libraries"
},
{
"paragraph_id": 27,
"text": "The Australian Government has introduced legislation that requires ISP's to \"restrict access to age restricted content (commercial MA15+ content and R18+ content) either hosted in Australia or provided from Australia\" that was due to commence from 20 January 2008, known as Cleanfeed.",
"title": "Use in public libraries"
},
{
"paragraph_id": 28,
"text": "Cleanfeed is a proposed mandatory ISP level content filtration system. It was proposed by the Beazley led Australian Labor Party opposition in a 2006 press release, with the intention of protecting children who were vulnerable due to claimed parental computer illiteracy. It was announced on 31 December 2007 as a policy to be implemented by the Rudd ALP government, and initial tests in Tasmania have produced a 2008 report. Cleanfeed is funded in the current budget, and is moving towards an Expression of Interest for live testing with ISPs in 2008. Public opposition and criticism have emerged, led by the EFA and gaining irregular mainstream media attention, with a majority of Australians reportedly \"strongly against\" its implementation. Criticisms include its expense, inaccuracy (it will be impossible to ensure only illegal sites are blocked) and the fact that it will be compulsory, which can be seen as an intrusion on free speech rights. Another major criticism point has been that although the filter is claimed to stop certain materials, the underground rings dealing in such materials will not be affected. The filter might also provide a false sense of security for parents, who might supervise children less while using the Internet, achieving the exact opposite effect. Cleanfeed is a responsibility of Senator Conroy's portfolio.",
"title": "Use in public libraries"
},
{
"paragraph_id": 29,
"text": "In Denmark it is stated policy that it will \"prevent inappropriate Internet sites from being accessed from children's libraries across Denmark\". \"'It is important that every library in the country has the opportunity to protect children against pornographic material when they are using library computers. It is a main priority for me as Culture Minister to make sure children can surf the net safely at libraries,' states Brian Mikkelsen in a press-release of the Danish Ministry of Culture.\"",
"title": "Use in public libraries"
},
{
"paragraph_id": 30,
"text": "Many libraries in the UK such as the British Library and local authority public libraries apply filters to Internet access. According to research conducted by the Radical Librarians Collective, at least 98% of public libraries apply filters; including categories such as \"LGBT interest\", \"abortion\" and \"questionable\". Some public libraries block Payday loan websites",
"title": "Use in public libraries"
},
{
"paragraph_id": 31,
"text": "The use of Internet filters or content-control software varies widely in public libraries in the United States, since Internet use policies are established by the local library board. Many libraries adopted Internet filters after Congress conditioned the receipt of universal service discounts on the use of Internet filters through the Children's Internet Protection Act (CIPA). Other libraries do not install content control software, believing that acceptable use policies and educational efforts address the issue of children accessing age-inappropriate content while preserving adult users' right to freely access information. Some libraries use Internet filters on computers used by children only. Some libraries that employ content-control software allow the software to be deactivated on a case-by-case basis on application to a librarian; libraries that are subject to CIPA are required to have a policy that allows adults to request that the filter be disabled without having to explain the reason for their request.",
"title": "Use in public libraries"
},
{
"paragraph_id": 32,
"text": "Many legal scholars believe that a number of legal cases, in particular Reno v. American Civil Liberties Union, established that the use of content-control software in libraries is a violation of the First Amendment. The Children's Internet Protection Act [CIPA] and the June 2003 case United States v. American Library Association found CIPA constitutional as a condition placed on the receipt of federal funding, stating that First Amendment concerns were dispelled by the law's provision that allowed adult library users to have the filtering software disabled, without having to explain the reasons for their request. The plurality decision left open a future \"as-applied\" Constitutional challenge, however.",
"title": "Use in public libraries"
},
{
"paragraph_id": 33,
"text": "In November 2006, a lawsuit was filed against the North Central Regional Library District (NCRL) in Washington State for its policy of refusing to disable restrictions upon requests of adult patrons, but CIPA was not challenged in that matter. In May 2010, the Washington State Supreme Court provided an opinion after it was asked to certify a question referred by the United States District Court for the Eastern District of Washington: \"Whether a public library, consistent with Article I, § 5 of the Washington Constitution, may filter Internet access for all patrons without disabling Web sites containing constitutionally-protected speech upon the request of an adult library patron.\" The Washington State Supreme Court ruled that NCRL's internet filtering policy did not violate Article I, Section 5 of the Washington State Constitution. The Court said: \"It appears to us that NCRL's filtering policy is reasonable and accords with its mission and these policies and is viewpoint neutral. It appears that no article I, section 5 content-based violation exists in this case. NCRL's essential mission is to promote reading and lifelong learning. As NCRL maintains, it is reasonable to impose restrictions on Internet access in order to maintain an environment that is conducive to study and contemplative thought.\" The case returned to federal court.",
"title": "Use in public libraries"
},
{
"paragraph_id": 34,
"text": "In March 2007, Virginia passed a law similar to CIPA that requires public libraries receiving state funds to use content-control software. Like CIPA, the law requires libraries to disable filters for an adult library user when requested to do so by the user.",
"title": "Use in public libraries"
},
{
"paragraph_id": 35,
"text": "Content filtering in general can \"be bypassed entirely by tech-savvy individuals.\" Blocking content on a device \"[will not]…guarantee that users won't eventually be able to find a way around the filter.\"",
"title": "Bypassing filters"
},
{
"paragraph_id": 36,
"text": "Some software may be bypassed successfully by using alternative protocols such as FTP or telnet or HTTPS, conducting searches in a different language, using a proxy server or a circumventor such as Psiphon. Also cached web pages returned by Google or other searches could bypass some controls as well. Web syndication services may provide alternate paths for content. Some of the more poorly designed programs can be shut down by killing their processes: for example, in Microsoft Windows through the Windows Task Manager, or in Mac OS X using Force Quit or Activity Monitor. Numerous workarounds and counters to workarounds from content-control software creators exist. Google services are often blocked by filters, but these may most often be bypassed by using https:// in place of http:// since content filtering software is not able to interpret content under secure connections (in this case SSL).",
"title": "Bypassing filters"
},
{
"paragraph_id": 37,
"text": "An encrypted VPN can be used as means of bypassing content control software, especially if the content control software is installed on an Internet gateway or firewall.",
"title": "Bypassing filters"
},
{
"paragraph_id": 38,
"text": "Other ways to bypass a content control filter include translation sites and establishing a remote connection with an uncensored device.",
"title": "Bypassing filters"
},
{
"paragraph_id": 39,
"text": "Some ISPs offer parental control options. Some offer security software which includes parental controls. Mac OS X v10.4 offers parental controls for several applications (Mail, Finder, iChat, Safari & Dictionary). Microsoft's Windows Vista operating system also includes content-control software.",
"title": "Products and services"
},
{
"paragraph_id": 40,
"text": "Content filtering technology exists in two major forms: application gateway or packet inspection. For HTTP access the application gateway is called a web-proxy or just a proxy. Such web-proxies can inspect both the initial request and the returned web page using arbitrarily complex rules and will not return any part of the page to the requester until a decision is made. In addition they can make substitutions in whole or for any part of the returned result. Packet inspection filters do not initially interfere with the connection to the server but inspect the data in the connection as it goes past, at some point the filter may decide that the connection is to be filtered and it will then disconnect it by injecting a TCP-Reset or similar faked packet. The two techniques can be used together with the packet filter monitoring a link until it sees an HTTP connection starting to an IP address that has content that needs filtering. The packet filter then redirects the connection to the web-proxy which can perform detailed filtering on the website without having to pass through all unfiltered connections. This combination is quite popular because it can significantly reduce the cost of the system.",
"title": "Products and services"
},
{
"paragraph_id": 41,
"text": "Gateway-based content control software may be more difficult to bypass than desktop software as the user does not have physical access to the filtering device. However, many of the techniques in the Bypassing filters section still work.",
"title": "Products and services"
}
] | An Internet filter is software that restricts or controls the content an Internet user is able to access, especially when utilized to restrict material delivered over the Internet via the Web, email, or other means. Content-control software determines what content will be available or blocked. Such restrictions can be applied at various levels: a government can attempt to apply them nationwide, or they can, for example, be applied by an Internet service provider to its clients, by an employer to its personnel, by a school to its students, by a library to its visitors, by a parent to a child's computer, or by individual users to their own computers. The motive is often to prevent access to content which the computer's owner(s) or other authorities may consider objectionable. When imposed without the consent of the user, content control can be characterised as a form of internet censorship. Some content-control software includes time control functions that empower parents to set the amount of time a child may spend accessing the Internet, playing games, or engaging in other computer activities. In some countries, such software is ubiquitous. In Cuba, if a computer user at a government-controlled Internet cafe types certain words, the word processor or web browser is automatically closed, and a "state security" warning is given. | 2001-11-17T14:24:20Z | 2023-11-17T04:43:29Z | [
"Template:Main article",
"Template:Needs update",
"Template:See also",
"Template:Reflist",
"Template:Cite news",
"Template:Short description",
"Template:Original research inline",
"Template:Wiktionary",
"Template:Cite web",
"Template:Cite book",
"Template:Censorship"
] | https://en.wikipedia.org/wiki/Internet_filter |
7,145 | Chambered cairn | A chambered cairn is a burial monument, usually constructed during the Neolithic, consisting of a sizeable (usually stone) chamber around and over which a cairn of stones was constructed. Some chambered cairns are also passage-graves. They are found throughout Britain and Ireland, with the largest number in Scotland.
Typically, the chamber is larger than a cist, and will contain a larger number of interments, which are either excarnated bones, inhumations, or cremations. Most were situated near a settlement, and served as that community's "graveyard".
During the early Neolithic (4000–3300 BC) architectural forms are highly regionalised with timber and earth monuments predominating in the east and stone-chambered cairns in the west. During the later Neolithic (3300–2500 BC) massive circular enclosures and the use of grooved ware and Unstan ware pottery emerge. Scotland has a particularly large number of chambered cairns; they are found in various different types described below. Along with the excavations of settlements such as Skara Brae, Links of Noltland, Barnhouse, Rinyo and Balfarg and the complex site at Ness of Brodgar these cairns provide important clues to the character of civilization in Scotland in the Neolithic. However the increasing use of cropmarks to identify Neolithic sites in lowland areas has tended to diminish the relative prominence of these cairns.
In the early phases bones of numerous bodies are often found together and it has been argued that this suggests that in death at least, the status of individuals was played down. During the late Neolithic henge sites were constructed and single burials began to become more commonplace; by the Bronze Age it is possible that even where chambered cairns were still being built they had become the burial places of prominent individuals rather than of communities as a whole.
The Clyde or Clyde-Carlingford type are principally found in northern and western Ireland and southwestern Scotland. They first were identified as a separate group in the Firth of Clyde region, hence the name. Over 100 have been identified in Scotland alone. Lacking a significant passage, they are a form of gallery grave. The burial chamber is normally located at one end of a rectangular or trapezoidal cairn, while a roofless, semi-circular forecourt at the entrance provided access from the outside (although the entrance itself was often blocked), and gives this type of chambered cairn its alternate name of court tomb or court cairn. These forecourts are typically fronted by large stones and it is thought the area in front of the cairn was used for public rituals of some kind. The chambers were created from large stones set on end, roofed with large flat stones and often sub-divided by slabs into small compartments. They are generally considered to be the earliest in Scotland.
Examples include Cairn Holy I and Cairn Holy II near Newton Stewart, a cairn at Port Charlotte, Islay, which dates to 3900–4000 BC, and Monamore, or Meallach's Grave, Arran, which may date from the early fifth millennium BC. Excavations at the Mid Gleniron cairns near Cairnholy revealed a multi-period construction which shed light on the development of this class of chambered cairn.
The Orkney-Cromarty group is by far the largest and most diverse. It has been subdivided into Yarrows, Camster and Cromarty subtypes but the differences are extremely subtle. The design is of dividing slabs at either side of a rectangular chamber, separating it into compartments or stalls. The number of these compartments ranges from 4 in the earliest examples to over 24 in an extreme example on Orkney. The actual shape of the cairn varies from simple circular designs to elaborate 'forecourts' protruding from each end, creating what look like small amphitheatres. It is likely that these are the result of cultural influences from mainland Europe, as they are similar to designs found in France and Spain.
Examples include Midhowe on Rousay and, from the Orkney Mainland, the Unstan and Wideford Hill chambered cairns, both of which date from the mid-4th millennium BC and were probably in use over long periods of time. When the Unstan cairn was excavated in 1884, grave goods were found that gave their name to Unstan ware pottery. Blackhammer cairn on Rousay is another example, dating from the 3rd millennium BC.
The Grey Cairns of Camster in Caithness are examples of this type from mainland Scotland. The Tomb of the Eagles on South Ronaldsay is a stalled cairn that shows some similarities with the later Maeshowe type. It was in use for 800 years or more and numerous bird bones were found here, predominantly white-tailed sea eagle.
The Maeshowe group, named after the famous Orkney monument, is among the most elaborate. They appear relatively late and only in Orkney, and it is not clear why the use of cairns continued in the north when their construction had largely ceased elsewhere in Scotland. They consist of a central chamber from which small compartments lead off, into which burials would be placed. The central chambers are tall and steep-sided and have corbelled roofing faced with high-quality stone.
In addition to Maeshowe itself, which was constructed c. 2700 BC, there are various other examples from the Orkney Mainland. These include Quanterness chambered cairn (3250 BC) in which the remains of 157 individuals were found when excavated in the 1970s, Cuween Hill near Finstown which was found to contain the bones of men, dogs and oxen and Wideford Hill chambered cairn, which dates from 2000 BC.
Examples from elsewhere in Orkney include the Vinquoy and Huntersquoy chambered cairns, both found at the north end of the island of Eday, and Quoyness on Sanday, which was constructed about 2900 BC and is surrounded by an arc of Bronze Age mounds. The central chamber of the Holm of Papa Westray South cairn is over 20 metres long.
The Bookan type is named after a cairn found to the north-west of the Ring of Brodgar in Orkney, which is now a dilapidated oval mound about 16 metres in diameter. Excavations in 1861 indicated a rectangular central chamber surrounded by five smaller chambers. Because of the structure's unusual design, it was originally presumed to be an early form. However, later interpretations and further excavation work in 2002 suggested that the type has more in common with the later Maeshowe cairns than with the stalled Orkney-Cromarty cairns.
Huntersquoy chambered cairn on Eday is a two-storey Orkney–Cromarty type cairn with a Bookan-type lower chamber.
The Shetland or Zetland group are relatively small passage graves that are round or heel-shaped in outline. The whole chamber is cross- or trefoil-shaped and there are no smaller individual compartments. An example is to be found on the uninhabited island of Vementry on the north side of the West Mainland, where it appears that the cairn may originally have been circular, with its distinctive heel shape added as a secondary development, a process repeated elsewhere in Shetland. This probably served to make the cairn more distinctive and the forecourt area more defined.
Like the Shetland cairns, the Hebridean group appears relatively late in the Neolithic. They are largely found in the Outer Hebrides, although a mixture of cairn types is found there. These passage graves are usually larger than the Shetland type and are round or have funnel-shaped forecourts, although a few are long cairns – perhaps originally circular but with later tails added. They often have a polygonal chamber and a short passage to one end of the cairn.
The Rubha an Dùnain peninsula on the island of Skye provides an example from the 2nd or 3rd millennium BC. Barpa Langass on North Uist is the best preserved chambered cairn in the Hebrides.
Bargrennan chambered cairns are a class of passage graves found only in south-west Scotland, in western Dumfries and Galloway and southern Ayrshire. As well as being structurally different from the nearby Clyde cairns, Bargrennan cairns are distinguished by their siting and distribution; they are found in upland, inland areas of Galloway and Ayrshire.
In addition to the increasing prominence of individual burials, during the Bronze Age regional differences in architecture in Scotland became more pronounced. The Clava cairns date from this period, with about 50 cairns of this type in the Inverness area. Corrimony chambered cairn near Drumnadrochit is an example dated to 2000 BC or older; the only surviving evidence of burial there was a stain indicating the presence of a single body, and the cairn is surrounded by a circle of 11 standing stones. The cairns at Balnuaran of Clava are of a similar date. The largest of the three is the north-east cairn, which was partially reconstructed in the 19th century; the central cairn may have been used as a funeral pyre.
Glebe cairn in Kilmartin Glen in Argyll dates from 1700 BC and has two stone cists inside, in one of which a jet necklace was found during 19th-century excavations. There are numerous prehistoric sites in the vicinity, including Nether Largie North cairn, which was entirely removed and rebuilt during excavations in 1930.
In Wales, there are 18 chambered cairns listed as Scheduled Ancient Monuments.
Canadian whisky

Canadian whisky is a type of whisky produced in Canada. Most Canadian whiskies are blended multi-grain liquors containing a large percentage of corn spirits, and are typically lighter and smoother than other whisky styles. When Canadian distillers began adding small amounts of highly flavourful rye grain to their mashes, people began demanding this new rye-flavoured whisky, referring to it simply as "rye". Today, as for the past two centuries, the terms "rye whisky" and "Canadian whisky" are used interchangeably in Canada and (as defined in Canadian law) refer to exactly the same product, which generally is made with only a small amount of rye grain.
Historically, in Canada, corn-based whisky that had some rye grain added to the mash bill to give it more flavour came to be called "rye".
The regulations under Canada's Food and Drugs Act stipulate the minimum conditions that must be met in order to label a product as "Canadian Whisky" or "Canadian Rye Whisky" (or "Rye Whisky")—these are also upheld internationally through geographical indication agreements. These regulations state that whisky must "be mashed, distilled and aged in Canada", "be aged in small wood vessels for not less than three years", "contain not less than 40 per cent alcohol by volume" and "may contain caramel and flavouring". Within these parameters Canadian whiskies can vary considerably, especially with the allowance of "flavouring"—though the additional requirement that they "possess the aroma, taste and character generally attributed to Canadian whisky" can act as a limiting factor.
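The quoted conditions amount to a short checklist, which can be illustrated with a small sketch. The following Python fragment is only an illustration of the quantitative minimums quoted above; the field and function names are invented for the example and do not come from the regulations or any official schema, and the subjective "aroma, taste and character" requirement cannot be captured this way.

```python
# Illustrative sketch only: the dataclass fields and function are assumptions
# made for the example, not an official schema. It encodes just the
# quantitative minimums quoted above from the labelling regulations.
from dataclasses import dataclass

@dataclass
class WhiskySpec:
    mashed_distilled_aged_in_canada: bool  # "mashed, distilled and aged in Canada"
    years_in_small_wood: float             # "aged in small wood vessels for not less than three years"
    abv_percent: float                     # "not less than 40 per cent alcohol by volume"

def meets_labelling_minimums(spec: WhiskySpec) -> bool:
    # Caramel and flavouring are permitted, so they add no further test here.
    return (
        spec.mashed_distilled_aged_in_canada
        and spec.years_in_small_wood >= 3
        and spec.abv_percent >= 40.0
    )

# Example: a blend aged four years and bottled at 40% ABV meets the minimums.
print(meets_labelling_minimums(
    WhiskySpec(mashed_distilled_aged_in_canada=True,
               years_in_small_wood=4,
               abv_percent=40.0)))  # True
```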
Canadian whiskies are most typically blends of whiskies made from a single grain, principally corn and rye, but also sometimes wheat or barley. Mash bills of multiple grains may also be used for some flavouring whiskies. The availability of inexpensive American corn, with its higher proportion of usable starches relative to other cereal grains, has led it to be most typically used to create base whiskies to which flavouring whiskies are blended in. Exceptions to this include the Highwood Distillery which specializes in using wheat and the Alberta Distillers which developed its own proprietary yeast strain that specializes in distilling rye. The flavouring whiskies are most typically rye whiskies, blended into the product to add most of its flavour and aroma. While Canadian whisky may be labelled as a "rye whisky" this blending technique only necessitates a small percentage (such as 10%) of rye to create the flavour, whereas much more rye would be required if it were added to a mash bill alongside the more readily distilled corn.
The base whiskies are distilled to between 180 and 190 proof which results in few congener by-products (such as fusel alcohol, aldehydes, esters, etc.) and creates a lighter taste. By comparison, an American whisky distilled any higher than 160 proof is labelled as "light whiskey". The flavouring whiskies are distilled to a lower proof so that they retain more of the grain's flavour. The relative lightness created by the use of base whiskies makes Canadian whisky useful for mixing into cocktails and highballs. The minimum three year aging in small wood barrels applies to all whiskies used in the blend. As the regulations do not limit the specific type of wood that must be used, a variety of flavours can be achieved by blending whiskies aged in different types of barrels. In addition to new wood barrels, charred or uncharred, flavour can be added by aging whiskies in previously used bourbon or fortified wine barrels for different lengths of time.
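For readers unfamiliar with the proof figures above, they can be converted to alcohol by volume on the assumption that they are given on the US proof scale, where proof is simply twice the percentage of alcohol by volume. The short sketch below is only a worked illustration of that arithmetic, not something drawn from the source.

```python
# Proof/ABV arithmetic, assuming the US scale on which proof = 2 x ABV.
def us_proof_to_abv(proof: float) -> float:
    return proof / 2.0

def abv_to_us_proof(abv: float) -> float:
    return abv * 2.0

print(us_proof_to_abv(180), us_proof_to_abv(190))  # 90.0 95.0 -> base whiskies at roughly 90-95% ABV
print(us_proof_to_abv(160))                        # 80.0     -> threshold for US "light whiskey"
print(abv_to_us_proof(40))                         # 80.0     -> the 40% ABV bottling minimum equals 80 proof
```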
In the 18th and early 19th centuries, gristmills distilled surplus grains to avoid spoilage. Most of these early whiskies would have been rough, mostly unaged wheat whiskey. Distilling methods and technologies were brought to Canada by American and European immigrants with experience in distilling wheat and rye. This early whisky from improvised stills, often with the grains closest to spoilage, was produced with various, uncontrolled proofs and was consumed, unaged, by the local market. While most distilling capacity was taken up producing rum, a result of Atlantic Canada's position in the British sugar trade, the first commercial scale production of whisky in Canada began in 1801 when John Molson purchased a copper pot still, previously used to produce rum, in Montreal. With his son Thomas Molson, and eventually partner James Morton, the Molsons operated a distillery in Montreal and Kingston and were the first in Canada to export whisky, benefiting from Napoleonic Wars' disruption in supplying French wine and brandies to England.
Gooderham and Worts began producing whisky in 1837 in Toronto as a side business to their wheat milling but surpassed Molson's production by the 1850s as they expanded their operations with a new distillery in what would become the Distillery District. Henry Corby started distilling whisky as a side business from his gristmill in 1859 in what became known as Corbyville and Joseph Seagram began working in his father-in-law's Waterloo flour mill and distillery in 1864, which he would eventually purchase in 1883. Meanwhile, Americans Hiram Walker and J.P. Wiser moved to Canada: Walker to Windsor in 1858 to open a flour mill and distillery and Wiser to Prescott in 1857 to work at his uncle's distillery where he introduced a rye whisky and was successful enough to buy the distillery five years later. The disruption of the American Civil War created an export opportunity for Canadian-made whiskies and their quality, particularly those from Walker and Wiser who had already begun the practice of aging their whiskies, sustained that market even after post-war tariffs were introduced. In the 1880s, Canada's National Policy placed high tariffs on foreign alcoholic products as whisky began to be sold in bottles and the federal government instituted a bottled-in-bond program that provided certification of the time a whisky spent aging and allowed deferral of taxes for that period, which encouraged aging. In 1890 Canada became the first country to enact an aging law for whiskies, requiring them to be aged at least two years. The growing temperance movement culminated in prohibition in 1916 and distilleries had to either specialize in the export market or switch to alternative products, like industrial alcohols which were in demand in support of the war effort.
With the deferred revenue and storage costs of the Aging Law acting as a barrier to new entrants and the reduced market due to prohibition, consolidation of Canadian whisky had begun. Henry Corby Jr. modernized and expanded upon his father's distillery and sold it, in 1905, to businessman Mortimer Davis who also purchased the Wiser distillery, in 1918, from the heirs of J.P. Wiser. Davis's salesman Harry Hatch spent time promoting the Corby and Wiser brands and developing a distribution network in the United States which held together as Canadian prohibition ended and American prohibition began. After Hatch's falling out with Davis, Hatch purchased the struggling Gooderham and Worts in 1923 and switched out Davis's whisky for his. Hatch was successful enough to be able to also purchase the Walker distillery, and the popular Canadian Club brand, from Hiram's grandsons in 1926. While American prohibition created risk and instability in the Canadian whisky industry, some benefited from purchasing unused American distillation equipment and from sales to exporters (nominally to foreign countries like Saint Pierre and Miquelon, though actually to bootleggers to the United States). Along with Hatch, the Bronfman family was able to profit from making whisky destined for the United States during prohibition, though mostly in Western Canada, and was able to open a distillery in LaSalle, Quebec, and merge their company, in 1928, with Seagram's, which had struggled with transitioning to the prohibition marketplace. Samuel Bronfman became president of the company and, with his dominant personality, began a strategy of increasing their capacity and aging whiskies in anticipation of the end of prohibition. When that did occur, in 1933, Seagram's was in a position to quickly expand; they purchased The British Columbia Distilling Company from the Riefel family in 1935, as well as several American distilleries and introduced new brands, one of them being Crown Royal, in 1939, which would eventually become one of the best-selling Canadian whiskies.
While some capacity was switched to producing industrial alcohols in support of the country's World War II efforts, the industry expanded again after the war until the 1980s. In 1945, Schenley Industries purchased one of those industrial alcohol distilleries in Valleyfield, Quebec, and repurposed several defunct American whiskey brands, like Golden Wedding, Old Fine Copper, and starting in 1972, Gibson's Finest. Seeking to secure their supply of Canadian whisky, Barton Brands also built a new distillery in Collingwood, Ontario, in 1967, where they would produce Canadian Mist, though they sold the distillery and brand only four years later to Brown–Forman. As proximity to the shipping routes (by rail and boat) to the US became less important, large distilleries were established in Alberta and Manitoba. Five years after starting to experiment with whiskies in their Toronto gin distillery, W. & A. Gilbey Ltd. created the Black Velvet blend in 1951, which was so successful that a new distillery was constructed in Lethbridge, Alberta, in 1973 to produce it.
Also in the west, a Calgary-based business group recruited the Riefels from British Columbia to oversee their Alberta Distillers operations in 1948. The company became an innovator in the practice of bulk shipping whiskies to the United States for bottling and the success of their Windsor Canadian brand (produced in Alberta but bottled in the United States) led National Distillers Limited to purchase Alberta Distillers, in 1964, to secure their supply chain. More Alberta investors founded the Highwood Distillery in 1974 in High River, Alberta, which specialized in wheat-based whiskies. Seagram's opened a large, new plant in Gimli, Manitoba, in 1969, which would eventually replace their Waterloo and LaSalle distilleries. In British Columbia, Ernie Potter who had been producing fruit liqueurs from alcohols distilled at Alberta Distillers built his own whisky distillery in Langley in 1958 and produced the Potter's and Century brands of whisky. Hiram Walker's built the Okanagan Distillery in Winfield, British Columbia, in 1970 with the intention of producing Canadian Club but was redirected to fulfill contracts to produce whiskies for Suntory before being closed in 1995.
After decades of expansion, a shift in consumer preferences towards white spirits (such as vodka) in the American market resulted in an excess supply of Canadian whiskies. While this allowed the whiskies to be aged longer, the unexpected storage costs and deferred revenue strained individual companies. With the distillers seeking investors and multinational corporations seeking value brands, a series of acquisitions and mergers occurred. Alberta Distillers was bought in 1987 by Fortune Brands which would go on to become part of Beam Suntory. Hiram Walker was sold in 1987 to Allied Lyons which Pernod Ricard took over in 2006, with Fortune Brands acquiring the Canadian Club brand. Grand Metropolitan had purchased Black Velvet in 1972 but sold the brand in 1999 to Constellation Brands who in turn sold it to Heaven Hill in 2019. Schenley was acquired in 1990 by United Distillers which would go on to become part of Diageo, though Gibson's Finest was sold to William Grant & Sons in 2001. Seagram's was sold in 2000 to Vivendi, which in turn sold its various brands and distilleries to Pernod Ricard and Diageo. Highwood would purchase Potter's in 2006. Despite the consolidation, the Kittling Ridge Distillery in Grimsby, Ontario, began to produce the Forty Creek brand, though it was sold to the Campari Group in 2014. Later, the Sazerac Company would purchase the brands Seagram's VO, Canadian 83 and Five Star from Diageo in 2018.
Canadian whisky featured prominently in rum-running into the U.S. during Prohibition. Hiram Walker's distillery in Windsor, Ontario, directly across the Detroit River and the international boundary between Canada and the United States, easily served bootleggers using small, fast smuggling boats.
The following distilleries presently produce Canadian whisky:
There are several distilleries based in Alberta, including the Alberta Distillers, established in 1946 in Calgary, Alberta. The distillery was purchased in 1987 by Fortune Brands which became Beam Suntory in 2011. The distillery uses a specific strain of yeast which they developed that specializes in fermenting rye. While the distillery exports much of its whisky for bottling in other countries, they also produce the brands Alberta Premium, Alberta Springs, Windsor Canadian, Tangle Ridge, and Canadian Club Chairman's Select.
Black Velvet Distillery (formerly the Palliser Distillery) was established in 1973 in Lethbridge, Alberta; it has been owned by Heaven Hill since 2019. They produce the Black Velvet brand, which is mostly shipped in bulk for bottling in the American market, with some bottled onsite for the Canadian market. The distillery also produces Danfield's and the Schenley labels Golden Wedding and OFC.
Highwood Distillery (formerly the Sunnyvale Distillery) was established in 1974 in High River, Alberta, and specializes in using wheat in its base whiskies. This distillery also produces vodka, rum, gin and liqueurs. Brands of Canadian whisky produced at the Highwood Distillery include Centennial, Century, Ninety, and Potter's. They also produce White Owl whisky, which is charcoal-filtered to remove the colouring introduced by aging in wood barrels.
Gimli Distillery was established in 1968 in Gimli, Manitoba, to produce Seagram brands; the distillery was acquired by Diageo in 2001. The Gimli Distillery is responsible for producing Crown Royal, the best-selling Canadian whisky in the world, with 7 million cases shipped in 2017. They also supply some of the whisky used in Seagram's VO and other blends.
Distilleries were established in Ontario during the mid-19th century, with Gooderham and Worts beginning operations in Toronto's Distillery District in the 1830s. Distilleries continued to operate from the Distillery District until 1990, when the area was reoriented towards commercial and residential development. Other former distilleries in the province include one in Corbyville, which hosted a distillery operated by Corby Spirit and Wine. A distillery in Waterloo was operated by Seagram to produce Crown Royal until 1992, although the company still maintains a blending and bottling plant in Amherstburg.
Presently, there are several major distilleries based in Ontario. The oldest functioning distillery in Ontario is the Hiram Walker Distillery, established in 1858 in Windsor, Ontario, but modernized and expanded upon several times since. The distillery is owned by Pernod Ricard and operated by Corby Spirit and Wine, of which Pernod has a controlling share. Brands produced at the Walker Distillery include Lot 40, Pike Creek, Gooderham and Worts, Hiram Walker's Special Old, Corby's Royal Reserve, and J.P. Wiser's brands. Most of its capacity is used for contract production of the Beam Suntory brand (and former Hiram Walker brand) Canadian Club, in addition to generic Canadian whisky that is exported in bulk and bottled under various labels in other countries.
Canadian Mist Distillery was established in 1967 in Collingwood, Ontario; the distillery is owned by the Sazerac Company and primarily produces the Canadian Mist brand for export. The distillery also produces whiskies used in the Collingwood brand, introduced in 2011, and the Bearface brand, introduced in 2018.
Kittling Ridge Distillery was established in 1992 with an associated winery in Grimsby, Ontario; its first whiskies came to market in 2002. The distillery was purchased in 2014 by Campari Group. The distillery produces the Forty Creek brand.
Old Montreal Distillery was established in 1929 as a Corby Spirit and Wine distillery; it was acquired by the Sazerac Company in 2011 and modernized in 2018. It produces Sazerac brands and has taken over bottling of Caribou Crossing.
Valleyfield Distillery (formerly the Schenley Distillery) was established in 1945 in a former brewery in Salaberry-de-Valleyfield, Quebec, near Montreal; the distillery has been owned by Diageo since 2008. Seagram's VO is bottled here with flavouring whisky from the Gimli Distillery. Otherwise, the Valleyfield Distillery specializes in producing base whiskies distilled from corn for other Diageo products.
Collective noun

In linguistics, a collective noun is a word referring to a collection of things taken as a whole. Most collective nouns in everyday speech are not specific to one kind of thing. For example, the collective noun "group" can be applied to people ("a group of people"), or dogs ("a group of dogs"), or objects ("a group of stones").
Some collective nouns are specific to one kind of thing, especially terms of venery, which identify groups of specific animals. For example, "pride" as a term of venery always refers to lions, never to dogs or cows. Other examples come from popular culture such as a group of owls, which is called a "parliament".
Different forms of English handle verb agreement with collective count nouns differently. For example, users of British English generally accept that collective nouns take either singular or plural verb forms depending on context and the metonymic shift that it implies.
Morphological derivation accounts for many collective words and various languages have common affixes for denoting collective nouns. Because derivation is a slower and less productive word formation process than the more overtly syntactical morphological methods, there are fewer collectives formed this way. As with all derived words, derivational collectives often differ semantically from the original words, acquiring new connotations and even new denotations.
Early Proto-Indo-European used the suffix *eh₂ to form collective nouns, which evolved into the Latin neuter plural ending -a, as in "datum/data". Late Proto-Indo-European used the ending *t, which evolved into the English ending -th, as in "young/youth".
The English endings -age and -ade often signify a collective. Sometimes, the relationship is easily recognizable: baggage, drainage, blockade. Though the etymology is plain to see, the derived words take on a distinct meaning. This is a productive ending, as evidenced by the recent coinage "signage".
German uses the prefix ge- to create collectives. The root word often undergoes umlaut and suffixation as well as receiving the ge- prefix, and nearly all nouns created in this way are of neuter gender, for example Berg ("mountain") giving Gebirge ("mountain range").
There are also several endings that can be used to create collectives, such as "welt" and "masse".
Dutch has a similar pattern, but sometimes uses the (unproductive) circumfix ge- -te, as in berg ("mountain") giving gebergte ("mountain range").
Swedish has cases where the collective form and the individual form are entirely different words, as with mygg (mosquitoes, collectively) beside mygga (a single mosquito).
Esperanto uses the collective infix -ar- to produce a large number of derived words, such as arbo ("tree") giving arbaro ("forest").
Two examples of collective nouns are "team" and "government", which are both words referring to groups of (usually) people. Both "team" and "government" are countable nouns (consider: "one team", "two teams", "most teams"; "one government", "two governments", "many governments").
Confusion often stems from the way that different forms of English handle agreement with collective nouns—specifically, whether or not to use the collective singular: the singular verb form with a collective noun. The plural verb forms are often used in British English with the singular forms of these countable nouns (e.g., "The team have finished the project."). Conversely, in the English language as a whole, singular verb forms can often be used with nouns ending in "-s" that were once considered plural (e.g., "Physics is my favorite academic subject"). This apparent "number mismatch" is a natural and logical feature of human language, and its mechanism is a subtle metonymic shift in the concepts underlying the words.
In British English, it is generally accepted that collective nouns can take either singular or plural verb forms depending on the context and the metonymic shift that it implies. For example, "the team is in the dressing room" (formal agreement) refers to the team as an ensemble, while "the team are fighting among themselves" (notional agreement) refers to the team as individuals. That is also the British English practice with names of countries and cities in sports contexts (e.g., "Newcastle have won the competition.").
In American English, collective nouns almost always take singular verb forms (formal agreement). In cases where a metonymic shift would be revealed nearby, the whole sentence should be recast to avoid the metonymy. (For example, "The team are fighting among themselves" may become "the team members are fighting among themselves" or simply "The team is infighting.") Collective proper nouns are usually taken as singular ("Apple is expected to release a new phone this year"), unless the plural is explicit in the proper noun itself, in which case it is taken as plural ("The Green Bay Packers are scheduled to play the Minnesota Vikings this weekend"). Further examples of collective proper nouns taking singular verbs include "General Motors is once again the world's largest producer of vehicles," "Texas Instruments is a large producer of electronics here," "British Airways is an airline company in Europe," and "American Telephone & Telegraph is a telecommunications company in North America." Such names might look plural, but they are not.
A good example of such a metonymic shift in the singular-to-plural direction (which exclusively takes place in British English) is the following sentence: "The team have finished the project." In that sentence, the underlying thought is of the individual members of the team working together to finish the project. Their accomplishment is collective, and the emphasis is not on their individual identities, but they are still discrete individuals; the word choice "team have" manages to convey both their collective and discrete identities simultaneously. Collective nouns that have a singular form but take a plural verb form are called collective plurals. An example of such a metonymic shift in the plural-to-singular direction is the following sentence: "Mathematics is my favorite academic subject." The word "mathematics" may have originally been plural in concept, referring to mathematical endeavors, but metonymic shift (the shift in concept from "the endeavors" to "the whole set of endeavors") produced the usage of "mathematics" as a singular entity taking singular verb forms. (A true mass-noun sense of "mathematics" followed naturally.)
Nominally singular pronouns can be collective nouns taking plural verb forms, according to the same rules that apply to other collective nouns. For example, it is correct usage in both British English and American English to say: "None are so fallible as those who are sure they're right." In that case, the plural verb is used because the context for "none" suggests more than one thing or person. This also applies to the use of an adjective as a collective noun: "The British are coming!"; "The poor will always be with you."
Other examples include:
This does not, however, affect the tense later in the sentence:
Abbreviations provide other "exceptions" in American usage concerning plurals:
When only the name is plural but not the object, place, or person:
The tradition of using "terms of venery" or "nouns of assembly", collective nouns that are specific to certain kinds of animals, stems from an English hunting tradition of the Late Middle Ages. The fashion of a consciously developed hunting language came to England from France. It was marked by an extensive proliferation of specialist vocabulary, applying different names to the same feature in different animals. The elements can be shown to have already been part of French and English hunting terminology by the beginning of the 14th century. In the course of the 14th century, it became a courtly fashion to extend the vocabulary, and by the 15th century, the tendency had reached exaggerated and even satirical proportions.
The Treatise, written by Walter of Bibbesworth in the mid-1200s, is the earliest source for collective nouns of animals in any European vernacular (and also the earliest source for animal noises). The Venerie of Twiti (early 14th century) distinguished three types of droppings of animals, and three different terms for herds of animals. Gaston Phoebus (14th century) had five terms for droppings of animals, which were extended to seven in the Master of the Game (early 15th century). The focus on collective terms for groups of animals emerged in the later 15th century. Thus, a list of collective nouns in Egerton MS 1995, dated to c. 1452 under the heading of "termis of venery &c.", extends to 70 items, and the list in the Book of Saint Albans (1486) runs to 164 items, many of which, even though introduced by "the compaynys of beestys and fowlys", relate not to venery but to human groups and professions and are clearly humorous, such as "a Doctryne of doctoris", "a Sentence of Juges", "a Fightyng of beggers", "an uncredibilite of Cocoldis", "a Melody of harpers", "a Gagle of women", "a Disworship of Scottis", etc.
The Book of Saint Albans became very popular during the 16th century and was reprinted frequently. Gervase Markham edited and commented on the list in his The Gentleman's Academie, in 1595. The book's popularity had the effect of perpetuating many of these terms as part of the Standard English lexicon even if they were originally meant to be humorous and have long ceased to have any practical application.
Even in their original context of medieval venery, the terms were of the nature of kennings, intended as a mark of erudition of the gentlemen able to use them correctly rather than for practical communication. The popularity of the terms in the modern period has resulted in the addition of numerous lighthearted, humorous or facetious collective nouns. | [
{
"paragraph_id": 0,
"text": "In linguistics, a collective noun is a word referring to a collection of things taken as a whole. Most collective nouns in everyday speech are not specific to one kind of thing. For example, the collective noun \"group\" can be applied to people (\"a group of people\"), or dogs (\"a group of dogs\"), or objects (\"a group of stones\").",
"title": ""
},
{
"paragraph_id": 1,
"text": "Some collective nouns are specific to one kind of thing, especially terms of venery, which identify groups of specific animals. For example, \"pride\" as a term of venery always refers to lions, never to dogs or cows. Other examples come from popular culture such as a group of owls, which is called a \"parliament\".",
"title": ""
},
{
"paragraph_id": 2,
"text": "Different forms of English handle verb agreement with collective count nouns differently. For example, users of British English generally accept that collective nouns take either singular or plural verb forms depending on context and the metonymic shift that it implies.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Morphological derivation accounts for many collective words and various languages have common affixes for denoting collective nouns. Because derivation is a slower and less productive word formation process than the more overtly syntactical morphological methods, there are fewer collectives formed this way. As with all derived words, derivational collectives often differ semantically from the original words, acquiring new connotations and even new denotations.",
"title": "Derivation"
},
{
"paragraph_id": 4,
"text": "Early Proto-Indo-European used the suffix *eh₂ to form collective nouns, which evolved into the Latin neuter plural ending -a, as in \"datum/data\". Late Proto-Indo-European used the ending *t, which evolved into the English ending -th, as in \"young/youth\".",
"title": "Affixes"
},
{
"paragraph_id": 5,
"text": "The English endings -age and -ade often signify a collective. Sometimes, the relationship is easily recognizable: baggage, drainage, blockade. Though the etymology is plain to see, the derived words take on a distinct meaning. This is a productive ending, as evidenced in the recent coin, \"signage\".",
"title": "Affixes"
},
{
"paragraph_id": 6,
"text": "German uses the prefix ge- to create collectives. The root word often undergoes umlaut and suffixation as well as receiving the ge- prefix. Nearly all nouns created in that way are of neuter gender:",
"title": "Affixes"
},
{
"paragraph_id": 7,
"text": "There are also several endings that can be used to create collectives, such as \"welt\" and \"masse\".",
"title": "Affixes"
},
{
"paragraph_id": 8,
"text": "Dutch has a similar pattern but sometimes uses the (unproductive) circumfix ge- -te:",
"title": "Affixes"
},
{
"paragraph_id": 9,
"text": "The following Swedish example has different words in the collective form and in the individual form:",
"title": "Affixes"
},
{
"paragraph_id": 10,
"text": "Esperanto uses the collective infix -ar- to produce a large number of derived words:",
"title": "Affixes"
},
{
"paragraph_id": 11,
"text": "Two examples of collective nouns are \"team\" and \"government\", which are both words referring to groups of (usually) people. Both \"team\" and \"government\" are countable nouns (consider: \"one team\", \"two teams\", \"most teams\"; \"one government\", \"two governments\", \"many governments\").",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 12,
"text": "Confusion often stems from the way that different forms of English handle agreement with collective nouns—specifically, whether or not to use the collective singular: the singular verb form with a collective noun. The plural verb forms are often used in British English with the singular forms of these countable nouns (e.g., \"The team have finished the project.\"). Conversely, in the English language as a whole, singular verb forms can often be used with nouns ending in \"-s\" that were once considered plural (e.g., \"Physics is my favorite academic subject\"). This apparent \"number mismatch\" is a natural and logical feature of human language, and its mechanism is a subtle metonymic shift in the concepts underlying the words.",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 13,
"text": "In British English, it is generally accepted that collective nouns can take either singular or plural verb forms depending on the context and the metonymic shift that it implies. For example, \"the team is in the dressing room\" (formal agreement) refers to the team as an ensemble, while \"the team are fighting among themselves\" (notional agreement) refers to the team as individuals. That is also the British English practice with names of countries and cities in sports contexts (e.g., \"Newcastle have won the competition.\").",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 14,
"text": "In American English, collective nouns almost always take singular verb forms (formal agreement). In cases that a metonymic shift would be revealed nearby, the whole sentence should be recast to avoid the metonymy. (For example, \"The team are fighting among themselves\" may become \"the team members are fighting among themselves\" or simply \"The team is infighting.\") Collective proper nouns are usually taken as singular (\"Apple is expected to release a new phone this year\"), unless the plural is explicit in the proper noun itself, in which case it is taken as plural (\"The Green Bay Packers are scheduled to play the Minnesota Vikings this weekend\"). More explicit examples of collective proper nouns include \"General Motors is once again the world's largest producer of vehicles,\" and \"Texas Instruments is a large producer of electronics here,\" and \"British Airways is an airline company in Europe.\" Furthermore, \"American Telephone & Telegraph is a telecommunications company in North America.\" Such phrases might look plural, but they are not.",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 15,
"text": "A good example of such a metonymic shift in the singular-to-plural direction (which exclusively takes place in British English) is the following sentence: \"The team have finished the project.\" In that sentence, the underlying thought is of the individual members of the team working together to finish the project. Their accomplishment is collective, and the emphasis is not on their individual identities, but they are still discrete individuals; the word choice \"team have\" manages to convey both their collective and discrete identities simultaneously. Collective nouns that have a singular form but take a plural verb form are called collective plurals. An example of such a metonymic shift in the plural-to-singular direction is the following sentence: \"Mathematics is my favorite academic subject.\" The word \"mathematics\" may have originally been plural in concept, referring to mathematic endeavors, but metonymic shift (the shift in concept from \"the endeavors\" to \"the whole set of endeavors\") produced the usage of \"mathematics\" as a singular entity taking singular verb forms. (A true mass-noun sense of \"mathematics\" followed naturally.)",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 16,
"text": "Nominally singular pronouns can be collective nouns taking plural verb forms, according to the same rules that apply to other collective nouns. For example, it is correct usage in both British English and American English usage to say: \"None are so fallible as those who are sure they're right.\" In that case, the plural verb is used because the context for \"none\" suggests more than one thing or person. This also applies to the use of an adjective as a collective noun: \"The British are coming!\"; \"The poor will always be with you.\"",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 17,
"text": "Other examples include:",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 18,
"text": "This does not, however, affect the tense later in the sentence:",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 19,
"text": "Abbreviations provide other \"exceptions\" in American usage concerning plurals:",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 20,
"text": "When only the name is plural but not the object, place, or person:",
"title": "Metonymic merging of grammatical number"
},
{
"paragraph_id": 21,
"text": "The tradition of using \"terms of venery\" or \"nouns of assembly\", collective nouns that are specific to certain kinds of animals, stems from an English hunting tradition of the Late Middle Ages. The fashion of a consciously developed hunting language came to England from France. It was marked by an extensive proliferation of specialist vocabulary, applying different names to the same feature in different animals. The elements can be shown to have already been part of French and English hunting terminology by the beginning of the 14th century. In the course of the 14th century, it became a courtly fashion to extend the vocabulary, and by the 15th century, the tendency had reached exaggerated and even satirical proportions.",
"title": "Terms of venery"
},
{
"paragraph_id": 22,
"text": "The Treatise, written by Walter of Bibbesworth in the mid-1200s, is the earliest source for collective nouns of animals in any European vernacular (and also the earliest source for animal noises). The Venerie of Twiti (early 14th century) distinguished three types of droppings of animals, and three different terms for herds of animals. Gaston Phoebus (14th century) had five terms for droppings of animals, which were extended to seven in the Master of the Game (early 15th century). The focus on collective terms for groups of animals emerged in the later 15th century. Thus, a list of collective nouns in Egerton MS 1995, dated to c. 1452 under the heading of \"termis of venery &c.\", extends to 70 items, and the list in the Book of Saint Albans (1486) runs to 164 items, many of which, even though introduced by \"the compaynys of beestys and fowlys\", relate not to venery but to human groups and professions and are clearly humorous, such as \"a Doctryne of doctoris\", \"a Sentence of Juges\", \"a Fightyng of beggers\", \"an uncredibilite of Cocoldis\", \"a Melody of harpers\", \"a Gagle of women\", \"a Disworship of Scottis\", etc.",
"title": "Terms of venery"
},
{
"paragraph_id": 23,
"text": "The Book of Saint Albans became very popular during the 16th century and was reprinted frequently. Gervase Markham edited and commented on the list in his The Gentleman's Academie, in 1595. The book's popularity had the effect of perpetuating many of these terms as part of the Standard English lexicon even if they were originally meant to be humorous and have long ceased to have any practical application.",
"title": "Terms of venery"
},
{
"paragraph_id": 24,
"text": "Even in their original context of medieval venery, the terms were of the nature of kennings, intended as a mark of erudition of the gentlemen able to use them correctly rather than for practical communication. The popularity of the terms in the modern period has resulted in the addition of numerous lighthearted, humorous or facetious collective nouns.",
"title": "Terms of venery"
}
] | In linguistics, a collective noun is a word referring to a collection of things taken as a whole. Most collective nouns in everyday speech are not specific to one kind of thing. For example, the collective noun "group" can be applied to people, or dogs, or objects. Some collective nouns are specific to one kind of thing, especially terms of venery, which identify groups of specific animals. For example, "pride" as a term of venery always refers to lions, never to dogs or cows. Other examples come from popular culture such as a group of owls, which is called a "parliament". Different forms of English handle verb agreement with collective count nouns differently. For example, users of British English generally accept that collective nouns take either singular or plural verb forms depending on context and the metonymic shift that it implies. | 2001-11-16T20:43:40Z | 2023-11-20T06:09:54Z | [
"Template:Cite book",
"Template:Blockquote",
"Template:Grammatical categories",
"Template:Linktext",
"Template:Main",
"Template:Authority control",
"Template:Anchor",
"Template:ISBN",
"Template:Cite news",
"Template:Lang",
"Template:Abbr",
"Template:More citations needed section",
"Template:Circa",
"Template:Reflist",
"Template:Short description",
"Template:Distinguish",
"Template:Citation-needed",
"Template:Wiktionary",
"Template:Lexical categories",
"Template:Further"
] | https://en.wikipedia.org/wiki/Collective_noun |
7,158 | Carat (mass) | The carat (ct) is a unit of mass equal to 200 mg (0.00705 oz; 0.00643 ozt), which is used for measuring gemstones and pearls. The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 points of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations.
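The arithmetic in the definition above is simple enough to check directly. The following minimal sketch (Python; my own illustration, not part of the article) converts a carat weight into the units quoted in the lead paragraph; the milligrams-per-ounce constants are standard values I have assumed, not figures given in the text.

```python
# Illustrative sketch of the metric-carat arithmetic described above (assumed helper, not from the article).
# 1 metric carat = 200 mg, divided into 100 points of 2 mg each.

MG_PER_CARAT = 200.0                 # metric carat, adopted 1907
MG_PER_POINT = MG_PER_CARAT / 100    # = 2 mg
MG_PER_OZ_AVDP = 28_349.523125       # avoirdupois ounce in milligrams (assumed standard value)
MG_PER_OZ_TROY = 31_103.4768         # troy ounce in milligrams (assumed standard value)

def carats_to_units(carats: float) -> dict:
    """Convert a carat weight into milligrams, points and ounces."""
    mg = carats * MG_PER_CARAT
    return {
        "milligrams": mg,
        "points": mg / MG_PER_POINT,
        "oz_avoirdupois": mg / MG_PER_OZ_AVDP,
        "oz_troy": mg / MG_PER_OZ_TROY,
    }

print(carats_to_units(1))    # ≈ 200 mg, 100 points, 0.00705 oz, 0.00643 ozt
print(carats_to_units(100))  # 100 carats = 20,000 mg = 20 g
```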
In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g).
The ANSI X.12 EDI standard abbreviation for the carat is CD.
First attested in English in the mid-15th century, the word carat comes from Italian carato, which comes from Arabic qīrāṭ (قيراط), in turn borrowed from Greek kerátion κεράτιον 'carob seed', a diminutive of keras 'horn'. It was a unit of weight, equal to 1/1728 (1/12³) of a pound (see Mina (unit)).
Carob seeds have been used throughout history to measure jewelry, because it was believed that there was little variance in their mass distribution. However, this belief was inaccurate: their mass varies about as much as that of seeds of other species.
In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds.
An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams – exactly one-fifth of a gram – had often been suggested in various countries, and was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897.
In the United Kingdom the original Board of Trade carat was exactly 3+1647⁄9691 grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly 3+17⁄101 grains (~3.168 grains = ~205 mg). Despite it being a non-metric unit, a number of metric countries have used this unit for its limited range of application.
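As a quick check of the figures above, the two fractional grain values can be converted to milligrams, assuming the standard grain of 64.79891 mg (my assumption; the article only quotes the rounded results). The sketch below reproduces the ~3.170/~3.168 grain and ~205 mg equivalents.

```python
# Verifying the Board of Trade carat figures quoted above (illustrative; assumes 1 grain = 64.79891 mg).
from fractions import Fraction

MG_PER_GRAIN = Fraction("64.79891")

board_of_trade_carats = {
    "original BoT carat": 3 + Fraction(1647, 9691),  # pre-1888 definition
    "BoT carat from 1888": 3 + Fraction(17, 101),
}

for label, grains in board_of_trade_carats.items():
    mg = grains * MG_PER_GRAIN
    print(f"{label}: {float(grains):.3f} grains = {float(mg):.2f} mg")

# original BoT carat: 3.170 grains = 205.41 mg
# BoT carat from 1888: 3.168 grains = 205.30 mg
```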
The Board of Trade carat was divisible into four diamond grains, but measurements were typically made in multiples of 1⁄64 carat.
There were also two varieties of refiners' carats once used in the United Kingdom—the pound carat and the ounce carat. The pound troy was divisible into 24 pound carats of 240 grains troy each; the pound carat was divisible into four pound grains of 60 grains troy each; and the pound grain was divisible into four pound quarters of 15 grains troy each. Likewise, the ounce troy was divisible into 24 ounce carats of 20 grains troy each; the ounce carat was divisible into four ounce grains of 5 grains troy each; and the ounce grain was divisible into four ounce quarters of 1+1⁄4 grains troy each.
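Because the refiners' subdivisions above are all exact, they can be checked against the troy weights they are built on (5,760 grains troy to the pound troy, 480 to the ounce troy). A small sanity-check sketch follows; the constant names are my own and not part of the article.

```python
# Consistency check of the refiners' carat subdivisions described above (illustrative only).
GRAINS_PER_TROY_POUND = 5760   # 12 oz troy x 480 grains
GRAINS_PER_TROY_OUNCE = 480

assert GRAINS_PER_TROY_POUND / 24 == 240    # pound carat   = 240 grains troy
assert 240 / 4 == 60                        # pound grain   = 60 grains troy
assert 60 / 4 == 15                         # pound quarter = 15 grains troy
assert GRAINS_PER_TROY_OUNCE / 24 == 20     # ounce carat   = 20 grains troy
assert 20 / 4 == 5                          # ounce grain   = 5 grains troy
assert 5 / 4 == 1.25                        # ounce quarter = 1 1/4 grains troy
print("all refiners' carat subdivisions are mutually consistent")
```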
The solidus was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called solidus was exactly 1 Roman pound, and that the weight of 1 solidus was 24 siliquae. The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 siliqua was approximately 189 mg. The Greeks had a similar unit of the same value.
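The 189 mg estimate follows directly from the figures in the paragraph above: one Roman pound of 327.45 g divided among 72 solidi of 24 siliquae each. A one-line sketch (mine, not the article's):

```python
# Reproducing the siliqua estimate above: 327.45 g Roman pound, 72 solidi/pound, 24 siliquae/solidus.
ROMAN_POUND_MG = 327_450
SILIQUAE_PER_POUND = 72 * 24                 # = 1,728
print(ROMAN_POUND_MG / SILIQUAE_PER_POUND)   # ≈ 189.5 mg, i.e. roughly 189 mg
```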
Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's Measures, Weights and Moneys of All Nations gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20 pennyweight each. | [
{
"paragraph_id": 0,
"text": "The carat (ct) is a unit of mass equal to 200 mg (0.00705 oz; 0.00643 ozt), which is used for measuring gemstones and pearls. The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 points of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The ANSI X.12 EDI standard abbreviation for the carat is CD.",
"title": ""
},
{
"paragraph_id": 3,
"text": "First attested in English in the mid-15th century, the word carat comes from Italian carato, which comes from Arabic (qīrāṭ; قيراط), in turn borrowed from Greek kerátion κεράτιον 'carob seed', a diminutive of keras 'horn'. It was a unit of weight, equal to 1/1728 (1/12) of a pound (see Mina (unit)).",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "Carob seeds have been used throughout history to measure jewelry, because it was believed that there was little variance in their mass distribution. However, this was a factual inaccuracy, as their mass varies about as much as seeds of other species.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In the past, each country had its own carat. It was often used for weighing gold. Beginning in the 1570s, it was used to measure weights of diamonds.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "An 'international carat' of 205 milligrams was proposed in 1871 by the Syndical Chamber of Jewellers, etc., in Paris, and accepted in 1877 by the Syndical Chamber of Diamond Merchants in Paris. A metric carat of 200 milligrams – exactly one-fifth of a gram – had often been suggested in various countries, and was finally proposed by the International Committee of Weights and Measures, and unanimously accepted at the fourth sexennial General Conference of the Metric Convention held in Paris in October 1907. It was soon made compulsory by law in France, but uptake of the new carat was slower in England, where its use was allowed by the Weights and Measures (Metric System) Act of 1897.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the United Kingdom the original Board of Trade carat was exactly 3+1647⁄9691 grains (~3.170 grains = ~205 mg); in 1888, the Board of Trade carat was changed to exactly 3+17⁄101 grains (~3.168 grains = ~205 mg). Despite it being a non-metric unit, a number of metric countries have used this unit for its limited range of application.",
"title": "Historical definitions"
},
{
"paragraph_id": 8,
"text": "The Board of Trade carat was divisible into four diamond grains, but measurements were typically made in multiples of +1⁄64 carat.",
"title": "Historical definitions"
},
{
"paragraph_id": 9,
"text": "There were also two varieties of refiners' carats once used in the United Kingdom—the pound carat and the ounce carat. The pound troy was divisible into 24 pound carats of 240 grains troy each; the pound carat was divisible into four pound grains of 60 grains troy each; and the pound grain was divisible into four pound quarters of 15 grains troy each. Likewise, the ounce troy was divisible into 24 ounce carats of 20 grains troy each; the ounce carat was divisible into four ounce grains of 5 grains troy each; and the ounce grain was divisible into four ounce quarters of 1+1⁄4 grains troy each.",
"title": "Historical definitions"
},
{
"paragraph_id": 10,
"text": "The solidus was also a Roman weight unit. There is literary evidence that the weight of 72 coins of the type called solidus was exactly 1 Roman pound, and that the weight of 1 solidus was 24 siliquae. The weight of a Roman pound is generally believed to have been 327.45 g or possibly up to 5 g less. Therefore, the metric equivalent of 1 siliqua was approximately 189 mg. The Greeks had a similar unit of the same value.",
"title": "Historical definitions"
},
{
"paragraph_id": 11,
"text": "Gold fineness in carats comes from carats and grains of gold in a solidus of coin. The conversion rates 1 solidus = 24 carats, 1 carat = 4 grains still stand. Woolhouse's Measures, Weights and Moneys of All Nations gives gold fineness in carats of 4 grains, and silver in troy pounds of 12 troy ounces of 20 pennyweight each.",
"title": "Historical definitions"
}
] | The carat (ct) is a unit of mass equal to 200 mg, which is used for measuring gemstones and pearls.
The current definition, sometimes known as the metric carat, was adopted in 1907 at the Fourth General Conference on Weights and Measures, and soon afterwards in many countries around the world. The carat is divisible into 100 points of 2 mg. Other subdivisions, and slightly different mass values, have been used in the past in different locations. In terms of diamonds, a paragon is a flawless stone of at least 100 carats (20 g). The ANSI X.12 EDI standard abbreviation for the carat is CD. | 2001-11-18T02:51:55Z | 2023-12-19T21:34:15Z | [
"Template:Notelist-lr",
"Template:Reflist",
"Template:Cite web",
"Template:Cite book",
"Template:Jewellery",
"Template:OEtymD",
"Template:Short description",
"Template:Infobox unit",
"Template:Cvt",
"Template:Cite encyclopedia",
"Template:Cite journal",
"Template:Clarify",
"Template:Source-attribution",
"Template:Authority control",
"Template:About",
"Template:Diamond",
"Template:Efn-lr",
"Template:Sup",
"Template:Frac"
] | https://en.wikipedia.org/wiki/Carat_(mass) |
7,160 | European Conference of Postal and Telecommunications Administrations | The European Conference of Postal and Telecommunications Administrations (CEPT) was established on June 26, 1959, by nineteen European states in Montreux, Switzerland, as a coordinating body for European state telecommunications and postal organizations. The acronym comes from the French version of its name Conférence européenne des administrations des postes et des télécommunications.
CEPT was responsible for the creation of the European Telecommunications Standards Institute (ETSI) in 1988.
CEPT is organised into three main components:
As of March 2022: 46 countries.
Albania, Andorra, Austria, Azerbaijan, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Moldova, Monaco, Montenegro, Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, San Marino, Serbia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, Vatican City. The Russian Federation and Belarus memberships were suspended indefinitely on March 17, 2022. | [
{
"paragraph_id": 0,
"text": "The European Conference of Postal and Telecommunications Administrations (CEPT) was established on June 26, 1959, by nineteen European states in Montreux, Switzerland, as a coordinating body for European state telecommunications and postal organizations. The acronym comes from the French version of its name Conférence européenne des administrations des postes et des télécommunications.",
"title": ""
},
{
"paragraph_id": 1,
"text": "CEPT was responsible for the creation of the European Telecommunications Standards Institute (ETSI) in 1988.",
"title": ""
},
{
"paragraph_id": 2,
"text": "CEPT is organised into three main components:",
"title": "Organization"
},
{
"paragraph_id": 3,
"text": "As of March 2022: 46 countries.",
"title": "Member countries"
},
{
"paragraph_id": 4,
"text": "Albania, Andorra, Austria, Azerbaijan, Belgium, Bosnia and Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Moldova, Monaco, Montenegro, Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, San Marino, Serbia, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, Vatican City. The Russian Federation and Belarus memberships were suspended indefinitely on March 17, 2022.",
"title": "Member countries"
}
] | The European Conference of Postal and Telecommunications Administrations (CEPT) was established on June 26, 1959, by nineteen European states in Montreux, Switzerland, as a coordinating body for European state telecommunications and postal organizations. The acronym comes from the French version of its name Conférence européenne des administrations des postes et des télécommunications. CEPT was responsible for the creation of the European Telecommunications Standards Institute (ETSI) in 1988. | 2001-11-18T16:23:45Z | 2023-09-04T10:26:12Z | [
"Template:Infobox organization",
"Template:Reflist",
"Template:Official website",
"Template:Short description",
"Template:Third-party",
"Template:Use British English",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Cite web",
"Template:Telecommunications"
] | https://en.wikipedia.org/wiki/European_Conference_of_Postal_and_Telecommunications_Administrations |
7,162 | Tramlink | London Trams, previously Tramlink and Croydon Tramlink, is a light rail tram system serving Croydon and surrounding areas in South London, England. It began operation in 2000, the first tram system in the London region since 1952. It is managed by London Trams, a public body part of Transport for London (TfL), and has been operated by FirstGroup since 2017. Tramlink is one of two light rail networks in Greater London, the other being the Docklands Light Railway.
The network consists of 39 stops along 28 km (17 mi) of track, on a mixture of street track shared with other traffic, dedicated track in public roads, and off-street track consisting of new rights-of-way, former railway lines, and one right-of-way where the Tramlink track runs parallel to a third rail-electrified Network Rail line.
The network's lines coincide in central Croydon, with eastern termini at Beckenham Junction, Elmers End and New Addington, and a western terminus at Wimbledon, where there is an interchange for London Underground. Tramlink is the fourth-busiest light rail network in the UK behind the Docklands Light Railway, Manchester Metrolink and Tyne and Wear Metro.
In the first half of the 20th century, Croydon had many tramlines. The first to close was the Addiscombe – East Croydon station route through George Street to Cherry Orchard Road in 1927, and the last were the Purley–Embankment and Croydon (Coombe Road)–Thornton Heath routes, closed in April 1951. However, in the spring of 1950, the Highways Committee were presented by the Mayor with the concept of running trams between East Croydon station and the new estate being constructed at New Addington. This was based on the fact that the Feltham cars used in Croydon were going to Leeds to serve their new estates on reserved tracks. In 1962, a private study, carried out with assistance from BR engineers, showed how easily the West Croydon–Wimbledon train service could be converted to tram operation while preventing conflict between trams and trains.
These two concepts were combined in a joint LRTL/TLRS proposal for a New Addington to Wimbledon service every 15 minutes via East and West Croydon and Mitcham, plus a New Addington to Tattenham Corner service every 15 minutes via East and West Croydon, Sutton and Epsom Downs. A branch into Forestdale to give an overlapping service from Sutton was also included. During the 1970s, several BR directors and up-and-coming managers were aware of the advantages. Chris Green, upon becoming managing director of Network South East, published his plans in 1987, expanding the concept to take in the Tattenham Corner and Caterham branches and provide a service from Croydon to Lewisham via Addiscombe and Hayes. Following on from the opening of the DLR, a small group working under Tony Ridley, then managing director of London Transport, investigated the potential for further light rail in London. The report 'Light Rail for London', written by engineer David Catling and transport planner Jon Willis, looked at a number of possible schemes, including conversion of the East London Line. However, a light rail network focussed on Croydon, with the conversion of existing heavy rail routes, was the most promising. The London Borough of Croydon wanted to improve access to the town centre without further road building and also improve access to the LCC-built New Addington estate. The project was developed by a small team in LT, headed by Scott McIntosh, and in Croydon by Jill Lucas.
The scheme was accepted in principle in February 1990 by Croydon Council who worked with what was then London Regional Transport (LRT) to propose Tramlink to Parliament. The Croydon Tramlink Act 1994 resulted, which gave LRT the power to build and run Tramlink.
Part of its track is the original route of the Surrey Iron Railway that opened in 1803.
In 1995 four consortia were shortlisted to build, operate and maintain Tramlink:
In 1996, Tramtrack Croydon (TC) won a 99-year Private Finance Initiative (PFI) contract to design, build, operate and maintain Tramlink. The equity partners in TC were Amey (50%), Royal Bank of Scotland (20%), 3i (20%) and Sir Robert McAlpine, with Bombardier Transportation contracted to build and maintain the trams and FirstGroup to operate the service. TC retained the revenue generated by Tramlink, and LRT had to pay compensation to TC for any changes to the fares and ticketing policy introduced later.
Construction work started in January 1997, with an expected opening in November 1999. The first tram was delivered in October 1998 to the new Depot at Therapia Lane and testing on the sections of the Wimbledon line began shortly afterwards.
The official opening of Tramlink took place on 10 May 2000 when route 3 from Croydon to New Addington opened to the public. Route 2 from Croydon to Beckenham Junction followed on 23 May 2000, and route 1 from Elmers End to Wimbledon opened a week later on 30 May 2000.
In March 2008, TfL announced that it had reached agreement to buy TC for £98 million. The purchase was finalised on 28 June 2008. The background to this purchase relates to the requirement that TfL (who took over from London Regional Transport in 2000) compensates TC for the consequences of any changes to the fares and ticketing policy introduced since 1996. In 2007 that payment was £4m, with an annual increase in rate. FirstGroup continues to operate the service.
In October 2008 TfL introduced a new livery, using the blue, white and green of the routes on TfL maps, to distinguish the trams from buses operating in the area. The colour of the cars was changed to green, and the brand name was changed from Croydon Tramlink to simply Tramlink. These refurbishments were completed in early 2009.
Centrale tram stop, in Tamworth Road on the one-way central loop, opened on 10 December 2005, increasing journey times slightly. As turnround times were already quite tight, this raised the issue of buying an extra tram to maintain punctuality. Partly for this reason but also to take into account the planned restructuring of services (subsequently introduced in July 2006), TfL issued tenders for a new tram. However, nothing resulted from this.
In January 2011, Tramtrack Croydon opened a tender for the supply of 10 new or second-hand trams from the end of summer 2011, for use between Therapia Lane and Elmers End. On 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahns for delivery in 2015, for use on the Wimbledon to Croydon link, an order later increased to six. This brought the total Variobahn fleet up to ten in 2015, and twelve in 2016 when the final two trams were delivered.
There are 39 stops, with 38 opened in the initial phase, and Centrale tram stop added on 10 December 2005. Most stops are 32.2 m (105 ft 8 in) long. They are virtually level with the doors and are all wider than 2 m (6 ft 7 in). This allows wheelchairs, prams, pushchairs and the elderly to board the tram easily with no steps. In street sections, the stop is integrated with the pavement. The tram stops have low platforms, 35 cm (14 in) above rail level. They are unstaffed; their automated ticket machines are no longer in use, as TfL has made the trams cashless. In general, access between the platforms involves crossing the tracks by pedestrian level crossing.
Tramlink uses some former main-line stations on the Wimbledon–West Croydon and Elmers End–Coombe Lane stretches of line. The railway platforms have been demolished and rebuilt to Tramlink specifications, except at Elmers End and Wimbledon where the track level was raised to meet the higher main-line platforms to enable cross-platform interchange.
All stops have disabled access, raised paving, CCTV, a Passenger Help Point, a Passenger Information Display (PID), litter bins, a ticket machine, a noticeboard and lamp-posts, and most also have seats and a shelter.
The PIDs display the destinations and expected arrival times of the next two trams. They can also display any message the controllers want to show, such as information about delays or warnings against placing rubbish or other objects on the track.
Tramlink has been shown on the principal tube map since 1 June 2016, having previously appeared only on the "London Connections" map.
When Tramlink first opened it had three routes: Line 1 (yellow) from Wimbledon to Elmers End, Line 2 (red) from Croydon to Beckenham Junction, and Line 3 (green) from Croydon to New Addington. On 23 July 2006 the network was restructured, with Route 1 from Elmers End to Croydon, Route 2 from Beckenham Junction to Croydon and Route 3 from New Addington to Wimbledon. On 25 June 2012 Route 4 from Therapia Lane to Elmers End was introduced. On Monday 4 April 2016, Route 4 was extended from Therapia Lane to Wimbledon.
On 25 February 2018, the network and timetables were restructured again for more even and reliable services. As part of this change, trams would no longer display route numbers on their dot matrix destination screens. This resulted in three routes:
Additionally, the first two trams from New Addington run through to Wimbledon. Overall, this resulted in a decrease of 2 tph (trams per hour) leaving Elmers End, a 25% reduction in capacity there and a 14% reduction in the Addiscombe area. However, it also regulated waiting times in that area and on the Wimbledon branch to every 5 minutes, from every 2–7 minutes previously.
Tramlink makes use of a number of National Rail lines, running parallel to franchised services, or in some cases, runs on previously abandoned railway corridors. Between Birkbeck and Beckenham Junction, Tramlink uses the Crystal Palace line, running on a single track alongside the track carrying Southern rail services. The National Rail track had been singled some years earlier.
From Elmers End to Woodside, Tramlink follows the former Addiscombe Line. At Woodside, the old station buildings stand disused, and the original platforms have been replaced by accessible low platforms. Tramlink then follows the former Woodside and South Croydon Railway (W&SCR) to reach the current Addiscombe tram stop, adjacent to the site of the demolished Bingham Road railway station. It continues along the former railway route to near Sandilands, where Tramlink curves sharply towards Sandilands tram stop. Another route from Sandilands tram stop curves sharply on to the W&SCR before passing through Park Hill (or Sandilands) tunnels and to the site of Coombe Road station after which it curves away across Lloyd Park.
Between Wimbledon station and Wandle Park, Tramlink follows the former West Croydon to Wimbledon Line, which was first opened in 1855 and closed on 31 May 1997 to allow for conversion into Tramlink. Within this section, from near Phipps Bridge to near Reeves Corner, Tramlink follows the Surrey Iron Railway, giving Tramlink a claim to one of the world's oldest railway alignments. Beyond Wandle Park, a Victorian footbridge beside Waddon New Road was dismantled to make way for the flyover over the West Croydon to Sutton railway line. The footbridge has been re-erected at Corfe Castle station on the Swanage Railway (although some evidence suggests that this was a similar footbridge removed from the site of Merton Park railway station).
Bus routes T31, T32 and T33 used to connect with Tramlink at the New Addington, Fieldway and Addington Village stops. T31 and T32 no longer run, and T33 has been renumbered as 433.
The onboard announcements are by BBC News reader (and tram enthusiast) Nicholas Owen. The announcement pattern is as follows: e.g. This tram is for Wimbledon; the next stop will be Merton Park.
Tramlink currently uses 35 trams. In summary:
The original fleet comprised 24 articulated low-floor Bombardier Flexity Swift CR4000 trams built in Vienna, numbered from 2530 onwards, continuing from 2529, the highest-numbered tram on London's former tram network, which closed in 1952. The original livery was red and white. One (2550) was painted in FirstGroup white, blue and pink livery. In 2006, the CR4000 fleet was refreshed, with the bus-style destination roller blinds replaced by a digital dot-matrix display. In 2008/09 the fleet was repainted externally in the new green livery, and the interiors were refurbished with new flooring, seat covers retrimmed in a new moquette, and stanchions repainted from yellow to green. One (2551) has not returned to service since the fatal accident on 9 November 2016.
In 2007, tram 2535 was named after Steven Parascandolo, a well known tram enthusiast.
In January 2011, Tramtrack Croydon invited tenders for the supply of ten new or second-hand trams, and on 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahn trams for delivery in 2015, an order which was later increased to six. This brought the total Variobahn fleet up to ten in 2015, and 12 in 2016 when the final two trams were delivered.
Engineers' vehicles used in Tramlink construction were hired for that purpose.
In November 2006 Tramlink purchased five second-hand engineering vehicles from Deutsche Bahn. These were two DB class Klv 53 [de] engineers' trams (numbered 058 and 059 in Tramlink service), and three 4-wheel wagons (numbered 060, 061, and 062). Service tram 058 and trailer 061 were both sold to the National Tramway Museum in 2010.
TfL Bus & Tram Passes are valid on Tramlink, as are Travelcards that include any of zones 3, 4, 5 and 6.
Pay-as-you-go Oyster Card fares are the same as on London Buses, although special fares may apply when using Tramlink feeder buses.
When using Oyster cards, passengers must touch in on the platform before boarding the tram. Special arrangements apply at Wimbledon station, where the Tramlink stop is within the National Rail and London Underground station. Tramlink passengers must therefore touch in at the station entry barriers then again at the Tramlink platform to inform the system that no mainline/LUL rail journey has been made.
EMV contactless payment cards can also be used to pay for fares in the same manner as Oyster cards. Ticket machines were withdrawn on 16 July 2018.
The service was created as a result of the Croydon Tramlink Act 1994 that received Royal Assent on 21 July 1994, a Private Bill jointly promoted by London Regional Transport (the predecessor of Transport for London (TfL)) and Croydon London Borough Council. Following a competitive tender, a consortium company Tramtrack Croydon Limited (incorporated in 1995) was awarded a 99-year concession to build and run the system. Since 28 June 2008, the company has been a subsidiary of TfL.
Tramlink is currently operated by Tram Operations Ltd (TOL), a subsidiary of FirstGroup, who have a contract to operate the service until 2030. TOL provides the drivers and management to operate the service; the infrastructure and trams are owned and maintained by a TfL subsidiary.
The key available trends in recent years for Tramlink are (years ending 31 March):
Activities in the financial year 2020/21 were severely reduced by the impact of the coronavirus pandemic.
Detailed passenger journeys since Tramlink commenced operations in May 2000 were:
As of 2020, the only extension actively being pursued by the Mayor of London and TfL is a new line to Sutton from Wimbledon or Colliers Wood, known as the Sutton Link.
In July 2013, then Mayor Boris Johnson had affirmed that there was a reasonable business case for Tramlink to cover the Wimbledon – Sutton corridor, which might also include a loop via St Helier Hospital and an extension to The Royal Marsden Hospital. In 2014, a proposed £320m scheme for a new line to connect Wimbledon to Sutton via Morden was made and brought to consultation jointly by the London Boroughs of Merton and Sutton. Although £100m from TfL was initially secured in the draft 2016/17 budget, this was subsequently reallocated.
In 2018, TfL opened a consultation on proposals for a connection to Sutton, with three route options: from South Wimbledon, from Colliers Wood (both having an option of a bus rapid transit route or a tram line) or from Wimbledon (only as a tram line). In February 2020, following the consultation, TfL announced their preference for a north–south tramway between Colliers Wood and Sutton town centre, with a projected cost of £425m, on the condition of securing additional funding. Work on the project stopped in July 2020, as Transport for London could not find sufficient funding for it to continue.
Numerous extensions to the network have been discussed or proposed over the years, involving varying degrees of support and investigative effort.
In 2002, as part of The Mayor's Transport Strategy for London, a number of proposed extensions were identified, including to Sutton from Wimbledon or Mitcham; to Crystal Palace; to Colliers Wood/Tooting; and along the A23. The Strategy said that "extensions to the network could, in principle, be developed at relatively modest cost where there is potential demand..." and sought initial views on the viability of a number of extensions by summer 2002.
In 2006, in a TfL consultation on an extension to Crystal Palace, three options were presented: on-street, off-street and a mixture of the two. After the consultation, the off-street option was favoured, to include Crystal Palace Station and Crystal Palace Parade. TfL stated in 2008 that due to lack of funding the plans for this extension would not be taken forward. They were revived shortly after Boris Johnson's re-election as Mayor in May 2012, but six months later they were cancelled again.
In November 2014, a 15-year plan, Trams 2030, called for upgrades to increase capacity on the network in line with an expected increase in ridership to 60 million passengers by 2031 (although the passenger numbers at the time (2013/14: 31.2 million) have not been exceeded since (as at 2019)). The upgrades were to improve reliability, support regeneration in the Croydon metropolitan centre, and future-proof the network for Crossrail 2, a potential Bakerloo line extension, and extensions to the tram network itself to a wide variety of destinations. The plans involve dual-tracking across the network and introducing diverting loops on either side of Croydon, allowing for a higher frequency of trams on all four branches without increasing congestion in central Croydon. The £737m investment was to be funded by the Croydon Growth Zone, TfL Business Plan, housing levies, and the respective boroughs, and by the affected developers.
All the various developments, if implemented, could theoretically require an increase in the fleet from 30 to up to 80 trams (depending on whether longer trams or coupled trams are used). As such, an increase in depot and stabling capacity would also be required; enlargement of the current Therapia Lane site, as well as sites near the Elmers End and Harrington Road tram stops, were shortlisted. | [
{
"paragraph_id": 0,
"text": "London Trams, previously Tramlink and Croydon Tramlink, is a light rail tram system serving Croydon and surrounding areas in South London, England. It began operation in 2000, the first tram system in the London region since 1952. It is managed by London Trams, a public body part of Transport for London (TfL), and has been operated by FirstGroup since 2017. Tramlink is one of two light rail networks in Greater London, the other being the Docklands Light Railway.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The network consists of 39 stops along 28 km (17 mi) of track, on a mixture of street track shared with other traffic, dedicated track in public roads, and off-street track consisting of new rights-of-way, former railway lines, and one right-of-way where the Tramlink track runs parallel to a third rail-electrified Network Rail line.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The network's lines coincide in central Croydon, with eastern termini at Beckenham Junction, Elmers End and New Addington, and a western terminus at Wimbledon, where there is an interchange for London Underground. Tramlink is the fourth-busiest light rail network in the UK behind the Docklands Light Railway, Manchester Metrolink and Tyne and Wear Metro.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the first half of the 20th century, Croydon had many tramlines. The first to close was the Addiscombe – East Croydon station route through George Street to Cherry Orchard Road in 1927 and the last was the Purley - Embankment and Croydon (Coombe Road) - Thornton Heath routes closed April 1951. However, in the Spring of 1950, the Highways Committee were presented by the Mayor with the concept of running trams between East Croydon station and the new estate being constructed at New Addington. This was based on the fact that the Feltham cars used in Croydon were going to Leeds to serve their new estates on reserved tracks. In 1962, a private study with assistance from BR engineers, showed how easy it was to convert the West Croydon - Wimbledon train service to tram operation and successfully prevent conflict between trams and trains.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "These two concepts became joined in joint LRTL/TLRS concept of New Addington to Wimbledon every 15 minutes via East and West Croydon and Mitcham plus New Addington to Tattenham Corner every 15 minutes via East and West Croydon, Sutton and Epsom Downs. A branch into Forestdale to give an overlap service from Sutton was also included. During the 1970s, several BR directors and up-and-coming managers were aware of the advantages. Chris Green, upon becoming managing director, Network South East, published his plans in 1987 expanding the concept to take in the Tattenham Corner and Caterham branches and provide a service from Croydon to Lewisham via Addiscombe and Hayes. Following on from the opening of the DLR a small group working under Tony Ridley, then managing director, London Transport, investigated the potential for further light rail in London. The report 'Light Rail for London', written by engineer David Catling and Transport Planner Jon Willis, looked at a number of possible schemes including conversion of the East London Line. However a light rail network focussed on Croydon, with the conversion of existing heavy rail routes, was the most promising. The London Borough of Croydon wanted to improve access to the town centre without further road building and also improve access to the LCC built New Addington estate. The project was developed by a small team in LT, headed by Scott McIntosh and in Croydon by Jill Lucas.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "The scheme was accepted in principle in February 1990 by Croydon Council who worked with what was then London Regional Transport (LRT) to propose Tramlink to Parliament. The Croydon Tramlink Act 1994 resulted, which gave LRT the power to build and run Tramlink.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Part of its track is the original route of the Surrey Iron Railway that opened in 1803.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1995 four consortia were shortlisted to build, operate and maintain Tramlink:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1996 Tramtrack Croydon (TC) won a 99-year Private Finance Initiative (PFI) contract to design, build, operate and maintain Tramlink. The equity partners in TC were Amey (50%), Royal Bank of Scotland (20%), 3i (20%) and Sir Robert McAlpine with Bombardier Transportation contracted to build and maintain the trams and FirstGroup operate the service. TC retained the revenue generated by Tramlink and LRT had to pay compensation to TC for any changes to the fares and ticketing policy introduced later.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Construction work started in January 1997, with an expected opening in November 1999. The first tram was delivered in October 1998 to the new Depot at Therapia Lane and testing on the sections of the Wimbledon line began shortly afterwards.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The official opening of Tramlink took place on 10 May 2000 when route 3 from Croydon to New Addington opened to the public. Route 2 from Croydon to Beckenham Junction followed on 23 May 2000, and route 1 from Elmers End to Wimbledon opened a week later on 30 May 2000.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In March 2008, TfL announced that it had reached agreement to buy TC for £98 million. The purchase was finalised on 28 June 2008. The background to this purchase relates to the requirement that TfL (who took over from London Regional Transport in 2000) compensates TC for the consequences of any changes to the fares and ticketing policy introduced since 1996. In 2007 that payment was £4m, with an annual increase in rate. FirstGroup continues to operate the service.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In October 2008 TfL introduced a new livery, using the blue, white and green of the routes on TfL maps, to distinguish the trams from buses operating in the area. The colour of the cars was changed to green, and the brand name was changed from Croydon Tramlink to simply Tramlink. These refurbishments were completed in early 2009.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Centrale tram stop, in Tamworth Road on the one-way central loop, opened on 10 December 2005, increasing journey times slightly. As turnround times were already quite tight, this raised the issue of buying an extra tram to maintain punctuality. Partly for this reason but also to take into account the planned restructuring of services (subsequently introduced in July 2006), TfL issued tenders for a new tram. However, nothing resulted from this.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In January 2011, Tramtrack Croydon opened a tender for the supply of 10 new or second-hand trams from the end of summer 2011, for use between Therapia Lane and Elmers End. On 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahns for delivery in 2015, for use on the Wimbledon to Croydon link, an order later increased to six. This brought the total Variobahn fleet up to ten in 2015, and twelve in 2016 when the final two trams were delivered.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "There are 39 stops, with 38 opened in the initial phase, and Centrale tram stop added on 10 December 2005. Most stops are 32.2 m (105 ft 8 in) long. They are virtually level with the doors and are all wider than 2 m (6 ft 7 in). This allows wheelchairs, prams, pushchairs and the elderly to board the tram easily with no steps. In street sections, the stop is integrated with the pavement. The tram stops have low platforms, 35 cm (14 in) above rail level. They are unstaffed and had automated ticket machines that are no longer in use due to TfL making trams cashless. In general, access between the platforms involves crossing the tracks by pedestrian level crossing.",
"title": "Current network"
},
{
"paragraph_id": 16,
"text": "Tramlink uses some former main-line stations on the Wimbledon–West Croydon and Elmers End–Coombe Lane stretches of line. The railway platforms have been demolished and rebuilt to Tramlink specifications, except at Elmers End and Wimbledon where the track level was raised to meet the higher main-line platforms to enable cross-platform interchange.",
"title": "Current network"
},
{
"paragraph_id": 17,
"text": "All stops have disabled access, raised paving, CCTV, a Passenger Help Point, a Passenger Information Display (PID), litter bins, a ticket machine, a noticeboard and lamp-posts, and most also have seats and a shelter.",
"title": "Current network"
},
{
"paragraph_id": 18,
"text": "The PIDs display the destinations and expected arrival times of the next two trams. They can also display any message the controllers want to display, such as information on delays or even safety instructions for vandals to stop putting rubbish or other objects onto the track.",
"title": "Current network"
},
{
"paragraph_id": 19,
"text": "Tramlink has been shown on the principal tube map since 1 June 2016, having previously appeared only on the \"London Connections\" map.",
"title": "Current network"
},
{
"paragraph_id": 20,
"text": "When Tramlink first opened it had three routes: Line 1 (yellow) from Wimbledon to Elmers End, Line 2 (red) from Croydon to Beckenham Junction, and Line 3 (green) from Croydon to New Addington. On 23 July 2006 the network was restructured, with Route 1 from Elmers End to Croydon, Route 2 from Beckenham Junction to Croydon and Route 3 from New Addington to Wimbledon. On 25 June 2012 Route 4 from Therapia Lane to Elmers End was introduced. On Monday 4 April 2016, Route 4 was extended from Therapia Lane to Wimbledon.",
"title": "Current network"
},
{
"paragraph_id": 21,
"text": "On 25 February 2018, the network and timetables were restructured again for more even and reliable services. As part of this change, trams would no longer display route numbers on their dot matrix destination screens. This resulted in three routes:",
"title": "Current network"
},
{
"paragraph_id": 22,
"text": "Additionally, the first two trams from New Addington will run to Wimbledon. Overall, this would result in a decrease in 2tph leaving Elmers End, resulting in a 25% decrease in capacity here, and 14% in the Addiscombe area. However, this would also regulate waiting times in this area and on the Wimbledon branch to every 5 minutes, from every 2–7 minutes.",
"title": "Current network"
},
{
"paragraph_id": 23,
"text": "Tramlink makes use of a number of National Rail lines, running parallel to franchised services, or in some cases, runs on previously abandoned railway corridors. Between Birkbeck and Beckenham Junction, Tramlink uses the Crystal Palace line, running on a single track alongside the track carrying Southern rail services. The National Rail track had been singled some years earlier.",
"title": "Current network"
},
{
"paragraph_id": 24,
"text": "From Elmers End to Woodside, Tramlink follows the former Addiscombe Line. At Woodside, the old station buildings stand disused, and the original platforms have been replaced by accessible low platforms. Tramlink then follows the former Woodside and South Croydon Railway (W&SCR) to reach the current Addiscombe tram stop, adjacent to the site of the demolished Bingham Road railway station. It continues along the former railway route to near Sandilands, where Tramlink curves sharply towards Sandilands tram stop. Another route from Sandilands tram stop curves sharply on to the W&SCR before passing through Park Hill (or Sandilands) tunnels and to the site of Coombe Road station after which it curves away across Lloyd Park.",
"title": "Current network"
},
{
"paragraph_id": 25,
"text": "Between Wimbledon station and Wandle Park, Tramlink follows the former West Croydon to Wimbledon Line, which was first opened in 1855 and closed on 31 May 1997 to allow for conversion into Tramlink. Within this section, from near Phipps Bridge to near Reeves Corner, Tramlink follows the Surrey Iron Railway, giving Tramlink a claim to one of the world's oldest railway alignments. Beyond Wandle Park, a Victorian footbridge beside Waddon New Road was dismantled to make way for the flyover over the West Croydon to Sutton railway line. The footbridge has been re-erected at Corfe Castle station on the Swanage Railway (although some evidence suggests that this was a similar footbridge removed from the site of Merton Park railway station).",
"title": "Current network"
},
{
"paragraph_id": 26,
"text": "Bus routes T31, T32 and T33 used to connect with Tramlink at the New Addington, Fieldway and Addington Village stops. T31 and T32 no longer run, and T33 has been renumbered as 433.",
"title": "Current network"
},
{
"paragraph_id": 27,
"text": "The onboard announcements are by BBC News reader (and tram enthusiast) Nicholas Owen. The announcement pattern is as follows: e.g. This tram is for Wimbledon; the next stop will be Merton Park.",
"title": "Current network"
},
{
"paragraph_id": 28,
"text": "Tramlink currently uses 35 trams. In summary:",
"title": "Rolling stock"
},
{
"paragraph_id": 29,
"text": "The original fleet comprised 24 articulated low floor Bombardier Flexity Swift CR4000 trams built in Vienna numbered beginning at 2530, continuing from the highest-numbered tram 2529 on London's former tram network, which closed in 1952. The original livery was red and white. One (2550) was painted in FirstGroup white, blue and pink livery. In 2006, the CR4000 fleet was refreshed, with the bus-style destination roller blinds being replaced with a digital dot-matrix display. In 2008/09 the fleet was repainted externally in the new green livery and the interiors were refurbished with new flooring, seat covers retrimmed in a new moquette and stanchions repainted from yellow to green. One (2551) has not returned to service after the fatal accident on 9 November 2016.",
"title": "Rolling stock"
},
{
"paragraph_id": 30,
"text": "In 2007, tram 2535 was named after Steven Parascandolo, a well known tram enthusiast.",
"title": "Rolling stock"
},
{
"paragraph_id": 31,
"text": "In January 2011, Tramtrack Croydon invited tenders for the supply of then new or second-hand trams, and on 18 August 2011, TfL announced that Stadler Rail had won a $19.75 million contract to supply six Variobahn trams similar to those used by Bybanen in Bergen, Norway. They entered service in 2012. In August 2013, TfL ordered an additional four Variobahn trams for delivery in 2015, an order which was later increased to six. This brought the total Variobahn fleet up to ten in 2015, and 12 in 2016 when the final two trams were delivered.",
"title": "Rolling stock"
},
{
"paragraph_id": 32,
"text": "Engineers' vehicles used in Tramlink construction were hired for that purpose.",
"title": "Rolling stock"
},
{
"paragraph_id": 33,
"text": "In November 2006 Tramlink purchased five second-hand engineering vehicles from Deutsche Bahn. These were two DB class Klv 53 [de] engineers' trams (numbered 058 and 059 in Tramlink service), and three 4-wheel wagons (numbered 060, 061, and 062). Service tram 058 and trailer 061 were both sold to the National Tramway Museum in 2010.",
"title": "Rolling stock"
},
{
"paragraph_id": 34,
"text": "TfL Bus & Tram Passes are valid on Tramlink, as are Travelcards that include any of zones 3, 4, 5 and 6.",
"title": "Fares and ticketing"
},
{
"paragraph_id": 35,
"text": "Pay-as-you-go Oyster Card fares are the same as on London Buses, although special fares may apply when using Tramlink feeder buses.",
"title": "Fares and ticketing"
},
{
"paragraph_id": 36,
"text": "When using Oyster cards, passengers must touch in on the platform before boarding the tram. Special arrangements apply at Wimbledon station, where the Tramlink stop is within the National Rail and London Underground station. Tramlink passengers must therefore touch in at the station entry barriers then again at the Tramlink platform to inform the system that no mainline/LUL rail journey has been made.",
"title": "Fares and ticketing"
},
{
"paragraph_id": 37,
"text": "EMV contactless payment cards can also be used to pay for fares in the same manner as Oyster cards. Ticket machines were withdrawn on 16 July 2018.",
"title": "Fares and ticketing"
},
{
"paragraph_id": 38,
"text": "The service was created as a result of the Croydon Tramlink Act 1994 that received Royal Assent on 21 July 1994, a Private Bill jointly promoted by London Regional Transport (the predecessor of Transport for London (TfL)) and Croydon London Borough Council. Following a competitive tender, a consortium company Tramtrack Croydon Limited (incorporated in 1995) was awarded a 99-year concession to build and run the system. Since 28 June 2008, the company has been a subsidiary of TfL.",
"title": "Corporate affairs"
},
{
"paragraph_id": 39,
"text": "Tramlink is currently operated by Tram Operations Ltd (TOL), a subsidiary of FirstGroup, who have a contract to operate the service until 2030. TOL provides the drivers and management to operate the service; the infrastructure and trams are owned and maintained by a TfL subsidiary.",
"title": "Corporate affairs"
},
{
"paragraph_id": 40,
"text": "The key available trends in recent years for Tramlink are (years ending 31 March):",
"title": "Corporate affairs"
},
{
"paragraph_id": 41,
"text": "Activities in the financial year 2020/21 were severely reduced by the impact of the coronavirus pandemic.",
"title": "Corporate affairs"
},
{
"paragraph_id": 42,
"text": "Detailed passenger journeys since Tramlink commenced operations in May 2000 were:",
"title": "Corporate affairs"
},
{
"paragraph_id": 43,
"text": "As of 2020, the only extension actively being pursued by the Mayor of London and TfL is a new line to Sutton from Wimbledon or Colliers Wood, known as the Sutton Link.",
"title": "Future developments"
},
{
"paragraph_id": 44,
"text": "In July 2013, then Mayor Boris Johnson had affirmed that there was a reasonable business case for Tramlink to cover the Wimbledon – Sutton corridor, which might also include a loop via St Helier Hospital and an extension to The Royal Marsden Hospital. In 2014, a proposed £320m scheme for a new line to connect Wimbledon to Sutton via Morden was made and brought to consultation jointly by the London Boroughs of Merton and Sutton. Although £100m from TfL was initially secured in the draft 2016/17 budget, this was subsequently reallocated.",
"title": "Future developments"
},
{
"paragraph_id": 45,
"text": "In 2018, TfL opened a consultation on proposals for a connection to Sutton, with three route options: from South Wimbledon, from Colliers Wood (both having an option of a bus rapid transit route or a tram line) or from Wimbledon (only as a tram line). In February 2020, following the consultation, TfL announced their preference for a north–south tramway between Colliers Wood and Sutton town centre, with a projected cost of £425m, on the condition of securing additional funding. Work on the project stopped in July 2020, as Transport for London could not find sufficient funding for it to continue.",
"title": "Future developments"
},
{
"paragraph_id": 46,
"text": "Numerous extensions to the network have been discussed or proposed over the years, involving varying degrees of support and investigative effort.",
"title": "Future developments"
},
{
"paragraph_id": 47,
"text": "In 2002, as part of The Mayor's Transport Strategy for London, a number of proposed extensions were identified, including to Sutton from Wimbledon or Mitcham; to Crystal Palace; to Colliers Wood/Tooting; and along the A23. The Strategy said that \"extensions to the network could, in principle, be developed at relatively modest cost where there is potential demand...\" and sought initial views on the viability of a number of extensions by summer 2002.",
"title": "Future developments"
},
{
"paragraph_id": 48,
"text": "In 2006, in a TfL consultation on an extension to Crystal Palace, three options were presented: on-street, off-street and a mixture of the two. After the consultation, the off-street option was favoured, to include Crystal Palace Station and Crystal Palace Parade. TfL stated in 2008 that due to lack of funding the plans for this extension would not be taken forward. They were revived shortly after Boris Johnson's re-election as Mayor in May 2012, but six months later they were cancelled again.",
"title": "Future developments"
},
{
"paragraph_id": 49,
"text": "In November 2014, a 15-year plan, Trams 2030, called for upgrades to increase capacity on the network in line with an expected increase in ridership to 60 million passengers by 2031 (although the passenger numbers at the time (2013/14: 31.2 million) have not been exceeded since (as at 2019)). The upgrades were to improve reliability, support regeneration in the Croydon metropolitan centre, and future-proof the network for Crossrail 2, a potential Bakerloo line extension, and extensions to the tram network itself to a wide variety of destinations. The plans involve dual-tracking across the network and introducing diverting loops on either side of Croydon, allowing for a higher frequency of trams on all four branches without increasing congestion in central Croydon. The £737m investment was to be funded by the Croydon Growth Zone, TfL Business Plan, housing levies, and the respective boroughs, and by the affected developers.",
"title": "Future developments"
},
{
"paragraph_id": 50,
"text": "All the various developments, if implemented, could theoretically require an increase in the fleet from 30 to up to 80 trams (depending on whether longer trams or coupled trams are used). As such, an increase in depot and stabling capacity would also be required; enlargement of the current Therapia Lane site, as well as sites near the Elmers End and Harrington Road tram stops, were shortlisted.",
"title": "Future developments"
}
] | London Trams, previously Tramlink and Croydon Tramlink, is a light rail tram system serving Croydon and surrounding areas in South London, England. It began operation in 2000, the first tram system in the London region since 1952. It is managed by London Trams, a public body part of Transport for London (TfL), and has been operated by FirstGroup since 2017. Tramlink is one of two light rail networks in Greater London, the other being the Docklands Light Railway. The network consists of 39 stops along 28 km (17 mi) of track, on a mixture of street track shared with other traffic, dedicated track in public roads, and off-street track consisting of new rights-of-way, former railway lines, and one right-of-way where the Tramlink track runs parallel to a third rail-electrified Network Rail line. The network's lines coincide in central Croydon, with eastern termini at Beckenham Junction, Elmers End and New Addington, and a western terminus at Wimbledon, where there is an interchange for London Underground. Tramlink is the fourth-busiest light rail network in the UK behind the Docklands Light Railway, Manchester Metrolink and Tyne and Wear Metro. | 2001-11-18T20:08:03Z | 2023-12-26T00:30:50Z | [
"Template:Tramlink navbox",
"Template:Notelist",
"Template:Cite news",
"Template:Cite magazine",
"Template:Commons category",
"Template:Transport in London",
"Template:Britishmetros",
"Template:Col-end",
"Template:Nbsp",
"Template:Color",
"Template:Cite book",
"Template:Cite press release",
"Template:Official website",
"Template:UK light rail",
"Template:Short description",
"Template:Infobox Public transit",
"Template:Main",
"Template:Reflist",
"Template:Rail-interchange",
"Template:Portal",
"Template:Attached KML",
"Template:Use dmy dates",
"Template:Citation needed",
"Template:Col-begin",
"Template:Ill",
"Template:Efn",
"Template:Ndash",
"Template:Webarchive",
"Template:About",
"Template:Use British English",
"Template:Convert",
"Template:Col-break",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Tramlink |
7,163 | Catenary | In physics and geometry, a catenary (US: /ˈkætənɛri/ KAT-ən-err-ee, UK: /kəˈtiːnəri/ kə-TEE-nər-ee) is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field.
The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not.
The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings.
The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope.
Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book Two New Sciences recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691.
Catenaries and related curves are used in architecture and engineering (e.g., in the design of bridges and arches so that forces do not result in bending moments). In the offshore oil and gas industry, "catenary" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. (This often supports a contact wire, in which case it does not follow a true catenary curve.)
In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape.
The word "catenary" is derived from the Latin word catēna, which means "chain". The English word "catenary" is usually attributed to Thomas Jefferson, who wrote in a letter to Thomas Paine on the construction of an arch for a bridge:
I have lately received from Italy a treatise on the equilibrium of arches, by the Abbé Mascheroni. It appears to be a very scientifical work. I have not yet had time to engage in it; but I find that the conclusions of his demonstrations are, that every part of the catenary is in perfect equilibrium.
It is often said that Galileo thought the curve of a hanging chain was parabolic. However, in his Two New Sciences (1638), Galileo wrote that a hanging cord is only an approximate parabola, correctly observing that this approximation improves in accuracy as the curvature gets smaller and is almost exact when the elevation is less than 45°. The fact that the curve followed by a chain is not a parabola was proven by Joachim Jungius (1587–1657); this result was published posthumously in 1669.
The application of the catenary to the construction of arches is attributed to Robert Hooke, whose "true mathematical and mechanical form" in the context of the rebuilding of St Paul's Cathedral alluded to a catenary. Some much older arches approximate catenaries, an example of which is the Arch of Taq-i Kisra in Ctesiphon.
In 1671, Hooke announced to the Royal Society that he had solved the problem of the optimal shape of an arch, and in 1675 published an encrypted solution as a Latin anagram in an appendix to his Description of Helioscopes, where he wrote that he had found "a true mathematical and mechanical form of all manner of Arches for Building." He did not publish the solution to this anagram in his lifetime, but in 1705 his executor provided it as ut pendet continuum flexile, sic stabit contiguum rigidum inversum, meaning "As hangs a flexible cable so, inverted, stand the touching pieces of an arch."
In 1691, Gottfried Leibniz, Christiaan Huygens, and Johann Bernoulli derived the equation in response to a challenge by Jakob Bernoulli; their solutions were published in the Acta Eruditorum for June 1691. David Gregory wrote a treatise on the catenary in 1697 in which he provided an incorrect derivation of the correct differential equation.
Euler proved in 1744 that the catenary is the curve which, when rotated about the x-axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796.
Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material.
The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation y = A cosh(Bx), which is a catenary if AB = 1. While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a "weighted catenary" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. The logo for McDonald's, the Golden Arches, while intended to be two joined parabolas, is also based on the catenary.
In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or "catenary bridge," where the roadway follows the cable.
A stressed ribbon bridge is a more sophisticated structure with the same catenary shape.
However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term "catenary" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola.
The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed.
When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power.
The equation of a catenary in Cartesian coordinates has the form
where cosh is the hyperbolic cosine function, and where a is the distance of the lowest point above the x axis. All catenary curves are similar to each other, since changing the parameter a is equivalent to a uniform scaling of the curve.
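The formula elided above is presumably the standard hyperbolic-cosine form (a reconstruction rather than a quotation):

\[ y = a\cosh\left(\frac{x}{a}\right) = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right). \]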
The Whewell equation for the catenary is
where φ is the tangential angle and s the arc length.
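In standard form, the Whewell equation being referred to is presumably

\[ s = a\tan\varphi . \]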
Differentiating gives
and eliminating φ gives the Cesàro equation
where κ is the curvature.
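Differentiating s = a tan φ and using κ = dφ/ds gives the standard Cesàro form (a reconstruction of the elided equation):

\[ \kappa = \frac{a}{s^{2}+a^{2}} . \]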
The radius of curvature is then
which is the length of the normal between the curve and the x-axis.
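Explicitly, for y = a cosh(x/a) the elided expression is presumably

\[ \rho = a\cosh^{2}\left(\frac{x}{a}\right) = \frac{y^{2}}{a}, \]

which indeed equals the length of the normal, y\sqrt{1+y'^{2}}, measured from the curve down to the x-axis.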
When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix.
Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels.
Over any horizontal interval, the ratio of the area under the catenary to its length equals a, independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the x-axis.
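This ratio property can be checked directly from the standard form y = a cosh(x/a) (a short verification, not part of the original text): the arc-length element is ds = cosh(x/a) dx, so

\[ \frac{\displaystyle\int_{x_1}^{x_2} a\cosh(x/a)\,dx}{\displaystyle\int_{x_1}^{x_2} \cosh(x/a)\,dx} = a . \]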
A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light c).
The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the x-axis.
In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment using the fact that these forces must be in balance if the chain is in static equilibrium.
Let the path followed by the chain be given parametrically by r = (x, y) = (x(s), y(s)) where s represents arc length and r is the position vector. This is the natural parameterization and has the property that
where u is a unit tangent vector.
A differential equation for the curve may be derived as follows. Let c be the lowest point on the chain, called the vertex of the catenary. The slope dy/dx of the curve is zero at c since it is a minimum point. Assume r is to the right of c since the other case is implied by symmetry. The forces acting on the section of the chain from c to r are the tension of the chain at c, the tension of the chain at r, and the weight of the chain. The tension at c is tangent to the curve at c and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written (−T0, 0) where T0 is the magnitude of the force. The tension at r is parallel to the curve at r and pulls the section to the right. The tension at r can be split into two components so it may be written Tu = (T cos φ, T sin φ), where T is the magnitude of the force and φ is the angle between the curve at r and the x-axis (see tangential angle). Finally, the weight of the chain is represented by (0, −λgs) where λ is the mass per unit length, g is the gravitational field strength and s is the length of the segment of chain between c and r.
The chain is in equilibrium so the sum of three forces is 0, therefore
and
and dividing these gives
It is convenient to write
which is the length of chain whose weight is equal in magnitude to the tension at c. Then
is an equation defining the curve.
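The elided quantities here are presumably the standard ones: the parameter a = T0/(λg), which has the dimension of length as described, and the resulting equation of the curve

\[ \frac{dy}{dx} = \tan\varphi = \frac{s}{a}, \]

obtained by dividing the vertical component of the force balance by the horizontal one.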
The horizontal component of the tension, T cos φ = T0 is constant and the vertical component of the tension, T sin φ = λgs is proportional to the length of chain between r and the vertex.
After deriving the equations of the curve (in the next section), y = a cosh(x/a), one can plug the equation back in to obtain the simple equation T = λgs/sin φ = λgy.
The differential equation given above can be solved to produce equations for the curve.
From
the formula for arc length gives
Then
and
The second of these equations can be integrated to give
and by shifting the position of the x-axis, β can be taken to be 0. Then
The x-axis thus chosen is called the directrix of the catenary.
It follows that the magnitude of the tension at a point (x, y) is T = λgy, which is proportional to the distance between the point and the directrix.
This tension may also be expressed as T = T0 y/a .
The integral of the expression for dx/ds can be found using standard techniques, giving
and, again, by shifting the position of the y-axis, α can be taken to be 0. Then
The y-axis thus chosen passes through the vertex and is called the axis of the catenary.
These results can be used to eliminate s giving
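The standard results of this integration, and of the elimination of s, are presumably (a reconstruction of the elided formulas)

\[ s = a\sinh\left(\frac{x}{a}\right), \qquad y = \sqrt{a^{2}+s^{2}}, \qquad y = a\cosh\left(\frac{x}{a}\right). \]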
The differential equation can be solved using a different approach. From
it follows that
and
Integrating gives,
and
As before, the x and y-axes can be shifted so α and β can be taken to be 0. Then
and taking the reciprocal of both sides
Adding and subtracting the last two equations then gives the solution
and
In general, the parameter a and the position of the axis must be determined from the given conditions; in this case they can be found as follows:
Relabel if necessary so that P1 is to the left of P2 and let H be the horizontal and v be the vertical distance from P1 to P2. Translate the axes so that the vertex of the catenary lies on the y-axis and its height a is adjusted so the catenary satisfies the standard equation of the curve
and let the coordinates of P1 and P2 be (x1, y1) and (x2, y2) respectively. The curve passes through these points, so the difference of height is
and the length of the curve from P1 to P2 is
When L − v is expanded using these expressions the result is
so
This is a transcendental equation in a and must be solved numerically. Since sinh(x)/x is strictly monotonic for x > 0, there is at most one solution with a > 0 and so there is at most one position of equilibrium.
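As a worked illustration of the numerical solution: in the standard treatment the expanded relation reduces to √(L² − v²) = 2a sinh(H/(2a)). The following minimal sketch, assuming that form, solves for a by bisection; the function name, bracketing strategy and tolerance are illustrative rather than taken from the source.

import math

def catenary_parameter(H, v, L, tol=1e-10):
    """Solve 2*a*sinh(H/(2*a)) = sqrt(L**2 - v**2) for the parameter a.

    H: horizontal distance between the supports
    v: vertical distance between the supports
    L: length of the chain (must exceed the straight-line distance)
    """
    rhs = math.sqrt(L * L - v * v)
    if rhs <= H:
        raise ValueError("chain too short to hang between the supports")

    # f is strictly decreasing in a, from +infinity (a -> 0+) to H - rhs < 0
    # (a -> infinity), so it has exactly one positive root for physically
    # reasonable inputs; bracket it and bisect.
    def f(a):
        return 2.0 * a * math.sinh(H / (2.0 * a)) - rhs

    lo, hi = 1e-6 * H, H
    while f(hi) > 0.0:
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: supports 10 m apart at equal height, 12 m of chain.
# a = catenary_parameter(10.0, 0.0, 12.0)

The returned a can then be substituted into y = a cosh(x/a), with the axes placed as described above, to recover the hanging shape.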
However, if both ends of the curve (P1 and P2) are at the same level (y1 = y2), it can be shown that
where L is the total length of the curve between P1 and P2 and h is the sag (vertical distance between P1, P2 and the vertex of the curve).
It can also be shown that
and
where H is the horizontal distance between P1 and P2 which are located at the same level (H = x2 − x1).
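For endpoints at the same level, the elided relations are presumably the standard ones:

\[ a = \frac{L^{2}-4h^{2}}{8h}, \qquad L = 2a\sinh\left(\frac{H}{2a}\right), \qquad h = a\left(\cosh\left(\frac{H}{2a}\right)-1\right). \]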
The horizontal traction force at P1 and P2 is T0 = λga, where λ is the mass per unit length of the chain or cable.
Consider a chain of length L suspended from two points of equal height and at distance D. The curve has to minimize its potential energy
and is subject to the constraint
The modified Lagrangian is therefore
where λ is the Lagrange multiplier to be determined. As the independent variable x does not appear in the Lagrangian, we can use the Beltrami identity
where C is an integration constant, in order to obtain a first integral
This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints.
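For concreteness, if the modified Lagrangian is written as (y − λ)√(1 + y′²) (one common sign convention; a reconstruction rather than a quotation), the Beltrami identity gives

\[ \frac{y-\lambda}{\sqrt{1+y'^{2}}} = C, \]

and separation of variables then yields y = λ + C cosh((x − x₀)/C), the hyperbolic cosine referred to above.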
If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density.
Let w denote the weight per unit length of the chain, then the weight of the chain has magnitude
where the limits of integration are c and r. Balancing forces as in the uniform chain produces
and
and therefore
Differentiation then gives
In terms of φ and the radius of curvature ρ this becomes
A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway. If the weight of the roadway per unit length is w and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from c to r is wx where x is the horizontal distance between c and r. Proceeding as before gives the differential equation
This is solved by simple integration to get
and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex.
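Written out under the stated assumptions (a reconstruction of the elided steps), the differential equation and its integral are

\[ \frac{dy}{dx} = \frac{w}{T_{0}}x \quad\Longrightarrow\quad y = \frac{w}{2T_{0}}x^{2} + \text{constant}, \]

a parabola, as stated.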
In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, w, per unit length of the chain can be written T/c, where c is a constant, and the analysis for nonuniform chains can be applied.
In this case the equations for tension are
Combining gives
and by differentiation
where ρ is the radius of curvature.
The solution to this is
In this case, the curve has vertical asymptotes and this limits the span to πc. Other relations are
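The closed form usually given for the catenary of equal strength, presumably the elided solution above, is

\[ y = c\,\ln\left(\sec\frac{x}{c}\right), \]

whose vertical asymptotes at x = ±πc/2 account for the maximum span of πc; further relations include, for example, ρ = c sec(x/c).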
The curve was studied in 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836.
Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as "catenary of equal phase gradient".
In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. Specifically, if p is the natural length of a section of spring, then its length with tension T applied is
where E is a constant equal to kp, k being the stiffness of the spring. In the catenary the value of T is variable, but the ratio remains valid at a local level, so
The curve followed by an elastic spring can now be derived following a method similar to that used for the inelastic chain.
The equations for tension of the spring are
and
from which
where p is the natural length of the segment from c to r and λ0 is the mass per unit length of the spring with no tension and g is the gravitational field strength. Write
so
Then
from which
Integrating gives the parametric equations
Again, the x and y-axes can be shifted so α and β can be taken to be 0. So
are parametric equations for the curve. At the rigid limit where E is large, the shape of the curve reduces to that of a non-elastic chain.
With no assumptions being made regarding the force G acting on the chain, the following analysis can be made.
First, let T = T(s) be the force of tension as a function of s. The chain is flexible so it can only exert a force parallel to itself. Since tension is defined as the force that the chain exerts on itself, T must be parallel to the chain. In other words,
where T is the magnitude of T and u is the unit tangent vector.
Second, let G = G(s) be the external force per unit length acting on a small segment of a chain as a function of s. The forces acting on the segment of the chain between s and s + Δs are the force of tension T(s + Δs) at one end of the segment, the nearly opposite force −T(s) at the other end, and the external force acting on the segment which is approximately GΔs. These forces must balance so
Divide by Δs and take the limit as Δs → 0 to obtain
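In vector form the resulting equations are presumably the standard force balance (a reconstruction):

\[ \frac{d\mathbf{T}}{ds} + \mathbf{G} = \mathbf{0}, \qquad \mathbf{T} = T\mathbf{u}. \]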
These equations can be used as the starting point in the analysis of a flexible chain acting under any external force. In the case of the standard catenary, G = (0, −λg) where the chain has mass λ per unit length and g is the gravitational field strength. | [
{
"paragraph_id": 0,
"text": "In physics and geometry, a catenary (US: /ˈkætənɛri/ KAT-ən-err-ee, UK: /kəˈtiːnəri/ kə-TEE-nər-ee) is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book Two New Sciences recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Catenaries and related curves are used in architecture and engineering (e.g., in the design of bridges and arches so that forces do not result in bending moments). In the offshore oil and gas industry, \"catenary\" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. (This often supports a contact wire, in which case it does not follow a true catenary curve.)",
"title": ""
},
{
"paragraph_id": 6,
"text": "In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape.",
"title": ""
},
{
"paragraph_id": 7,
"text": "The word \"catenary\" is derived from the Latin word catēna, which means \"chain\". The English word \"catenary\" is usually attributed to Thomas Jefferson, who wrote in a letter to Thomas Paine on the construction of an arch for a bridge:",
"title": "History"
},
{
"paragraph_id": 8,
"text": "I have lately received from Italy a treatise on the equilibrium of arches, by the Abbé Mascheroni. It appears to be a very scientifical work. I have not yet had time to engage in it; but I find that the conclusions of his demonstrations are, that every part of the catenary is in perfect equilibrium.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "It is often said that Galileo thought the curve of a hanging chain was parabolic. However, in his Two New Sciences (1638), Galileo wrote that a hanging cord is only an approximate parabola, correctly observing that this approximation improves in accuracy as the curvature gets smaller and is almost exact when the elevation is less than 45°. The fact that the curve followed by a chain is not a parabola was proven by Joachim Jungius (1587–1657); this result was published posthumously in 1669.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The application of the catenary to the construction of arches is attributed to Robert Hooke, whose \"true mathematical and mechanical form\" in the context of the rebuilding of St Paul's Cathedral alluded to a catenary. Some much older arches approximate catenaries, an example of which is the Arch of Taq-i Kisra in Ctesiphon.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In 1671, Hooke announced to the Royal Society that he had solved the problem of the optimal shape of an arch, and in 1675 published an encrypted solution as a Latin anagram in an appendix to his Description of Helioscopes, where he wrote that he had found \"a true mathematical and mechanical form of all manner of Arches for Building.\" He did not publish the solution to this anagram in his lifetime, but in 1705 his executor provided it as ut pendet continuum flexile, sic stabit contiguum rigidum inversum, meaning \"As hangs a flexible cable so, inverted, stand the touching pieces of an arch.\"",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1691, Gottfried Leibniz, Christiaan Huygens, and Johann Bernoulli derived the equation in response to a challenge by Jakob Bernoulli; their solutions were published in the Acta Eruditorum for June 1691. David Gregory wrote a treatise on the catenary in 1697 in which he provided an incorrect derivation of the correct differential equation.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Euler proved in 1744 that the catenary is the curve which, when rotated about the x-axis, gives the surface of minimum surface area (the catenoid) for the given bounding circles. Nicolas Fuss gave equations describing the equilibrium of a chain under any force in 1796.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Catenary arches are often used in the construction of kilns. To create the desired curve, the shape of a hanging chain of the desired dimensions is transferred to a form which is then used as a guide for the placement of bricks or other building material.",
"title": "Inverted catenary arch"
},
{
"paragraph_id": 15,
"text": "The Gateway Arch in St. Louis, Missouri, United States is sometimes said to be an (inverted) catenary, but this is incorrect. It is close to a more general curve called a flattened catenary, with equation y = A cosh(Bx), which is a catenary if AB = 1. While a catenary is the ideal shape for a freestanding arch of constant thickness, the Gateway Arch is narrower near the top. According to the U.S. National Historic Landmark nomination for the arch, it is a \"weighted catenary\" instead. Its shape corresponds to the shape that a weighted chain, having lighter links in the middle, would form. The logo for McDonald's, the Golden Arches, while intended to be two joined parabolas, is also based on the catenary.",
"title": "Inverted catenary arch"
},
{
"paragraph_id": 16,
"text": "",
"title": "Inverted catenary arch"
},
{
"paragraph_id": 17,
"text": "In free-hanging chains, the force exerted is uniform with respect to length of the chain, and so the chain follows the catenary curve. The same is true of a simple suspension bridge or \"catenary bridge,\" where the roadway follows the cable.",
"title": "Catenary bridges"
},
{
"paragraph_id": 18,
"text": "A stressed ribbon bridge is a more sophisticated structure with the same catenary shape.",
"title": "Catenary bridges"
},
{
"paragraph_id": 19,
"text": "However, in a suspension bridge with a suspended roadway, the chains or cables support the weight of the bridge, and so do not hang freely. In most cases the roadway is flat, so when the weight of the cable is negligible compared with the weight being supported, the force exerted is uniform with respect to horizontal distance, and the result is a parabola, as discussed below (although the term \"catenary\" is often still used, in an informal sense). If the cable is heavy then the resulting curve is between a catenary and a parabola.",
"title": "Catenary bridges"
},
{
"paragraph_id": 20,
"text": "The catenary produced by gravity provides an advantage to heavy anchor rodes. An anchor rode (or anchor line) usually consists of chain or cable or both. Anchor rodes are used by ships, oil rigs, docks, floating wind turbines, and other marine equipment which must be anchored to the seabed.",
"title": "Anchoring of marine objects"
},
{
"paragraph_id": 21,
"text": "When the rope is slack, the catenary curve presents a lower angle of pull on the anchor or mooring device than would be the case if it were nearly straight. This enhances the performance of the anchor and raises the level of force it will resist before dragging. To maintain the catenary shape in the presence of wind, a heavy chain is needed, so that only larger ships in deeper water can rely on this effect. Smaller boats also rely on catenary to maintain maximum holding power.",
"title": "Anchoring of marine objects"
},
{
"paragraph_id": 22,
"text": "The equation of a catenary in Cartesian coordinates has the form",
"title": "Mathematical description"
},
{
"paragraph_id": 23,
"text": "where cosh is the hyperbolic cosine function, and where a is the distance of the lowest point above the x axis. All catenary curves are similar to each other, since changing the parameter a is equivalent to a uniform scaling of the curve.",
"title": "Mathematical description"
},
{
"paragraph_id": 24,
"text": "The Whewell equation for the catenary is",
"title": "Mathematical description"
},
{
"paragraph_id": 25,
"text": "where φ {\\displaystyle \\varphi } is the tangential angle and s the arc length.",
"title": "Mathematical description"
},
{
"paragraph_id": 26,
"text": "Differentiating gives",
"title": "Mathematical description"
},
{
"paragraph_id": 27,
"text": "and eliminating φ {\\displaystyle \\varphi } gives the Cesàro equation",
"title": "Mathematical description"
},
{
"paragraph_id": 28,
"text": "where κ {\\displaystyle \\kappa } is the curvature.",
"title": "Mathematical description"
},
{
"paragraph_id": 29,
"text": "The radius of curvature is then",
"title": "Mathematical description"
},
{
"paragraph_id": 30,
"text": "which is the length of the normal between the curve and the x-axis.",
"title": "Mathematical description"
},
{
"paragraph_id": 31,
"text": "When a parabola is rolled along a straight line, the roulette curve traced by its focus is a catenary. The envelope of the directrix of the parabola is also a catenary. The involute from the vertex, that is the roulette traced by a point starting at the vertex when a line is rolled on a catenary, is the tractrix.",
"title": "Mathematical description"
},
{
"paragraph_id": 32,
"text": "Another roulette, formed by rolling a line on a catenary, is another line. This implies that square wheels can roll perfectly smoothly on a road made of a series of bumps in the shape of an inverted catenary curve. The wheels can be any regular polygon except a triangle, but the catenary must have parameters corresponding to the shape and dimensions of the wheels.",
"title": "Mathematical description"
},
{
"paragraph_id": 33,
"text": "Over any horizontal interval, the ratio of the area under the catenary to its length equals a, independent of the interval selected. The catenary is the only plane curve other than a horizontal line with this property. Also, the geometric centroid of the area under a stretch of catenary is the midpoint of the perpendicular segment connecting the centroid of the curve itself and the x-axis.",
"title": "Mathematical description"
},
{
"paragraph_id": 34,
"text": "A moving charge in a uniform electric field travels along a catenary (which tends to a parabola if the charge velocity is much less than the speed of light c).",
"title": "Mathematical description"
},
{
"paragraph_id": 35,
"text": "The surface of revolution with fixed radii at either end that has minimum surface area is a catenary revolved about the x-axis.",
"title": "Mathematical description"
},
{
"paragraph_id": 36,
"text": "In the mathematical model the chain (or cord, cable, rope, string, etc.) is idealized by assuming that it is so thin that it can be regarded as a curve and that it is so flexible any force of tension exerted by the chain is parallel to the chain. The analysis of the curve for an optimal arch is similar except that the forces of tension become forces of compression and everything is inverted. An underlying principle is that the chain may be considered a rigid body once it has attained equilibrium. Equations which define the shape of the curve and the tension of the chain at each point may be derived by a careful inspection of the various forces acting on a segment using the fact that these forces must be in balance if the chain is in static equilibrium.",
"title": "Analysis"
},
{
"paragraph_id": 37,
"text": "Let the path followed by the chain be given parametrically by r = (x, y) = (x(s), y(s)) where s represents arc length and r is the position vector. This is the natural parameterization and has the property that",
"title": "Analysis"
},
{
"paragraph_id": 38,
"text": "where u is a unit tangent vector.",
"title": "Analysis"
},
{
"paragraph_id": 39,
"text": "A differential equation for the curve may be derived as follows. Let c be the lowest point on the chain, called the vertex of the catenary. The slope dy/dx of the curve is zero at c since it is a minimum point. Assume r is to the right of c since the other case is implied by symmetry. The forces acting on the section of the chain from c to r are the tension of the chain at c, the tension of the chain at r, and the weight of the chain. The tension at c is tangent to the curve at c and is therefore horizontal without any vertical component and it pulls the section to the left so it may be written (−T0, 0) where T0 is the magnitude of the force. The tension at r is parallel to the curve at r and pulls the section to the right. The tension at r can be split into two components so it may be written Tu = (T cos φ, T sin φ), where T is the magnitude of the force and φ is the angle between the curve at r and the x-axis (see tangential angle). Finally, the weight of the chain is represented by (0, −λgs) where λ is the mass per unit length, g is the gravitational field strength and s is the length of the segment of chain between c and r.",
"title": "Analysis"
},
{
"paragraph_id": 40,
"text": "The chain is in equilibrium so the sum of three forces is 0, therefore",
"title": "Analysis"
},
{
"paragraph_id": 41,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 42,
"text": "and dividing these gives",
"title": "Analysis"
},
{
"paragraph_id": 43,
"text": "It is convenient to write",
"title": "Analysis"
},
{
"paragraph_id": 44,
"text": "which is the length of chain whose weight is equal in magnitude to the tension at c. Then",
"title": "Analysis"
},
{
"paragraph_id": 45,
"text": "is an equation defining the curve.",
"title": "Analysis"
},
{
"paragraph_id": 46,
"text": "The horizontal component of the tension, T cos φ = T0 is constant and the vertical component of the tension, T sin φ = λgs is proportional to the length of chain between r and the vertex.",
"title": "Analysis"
},
{
"paragraph_id": 47,
"text": "After deriving the equations of the curve (in the next section) y = a cosh ( x a ) {\\textstyle y=a\\cosh \\left({\\frac {x}{a}}\\right)} , one can plug the equation back to obtain the simple equation T = λ g s / sin φ = λ g y {\\displaystyle T=\\lambda gs/\\sin \\varphi =\\lambda gy} .",
"title": "Analysis"
},
{
"paragraph_id": 48,
"text": "The differential equation given above can be solved to produce equations for the curve.",
"title": "Analysis"
},
{
"paragraph_id": 49,
"text": "From",
"title": "Analysis"
},
{
"paragraph_id": 50,
"text": "the formula for arc length gives",
"title": "Analysis"
},
{
"paragraph_id": 51,
"text": "Then",
"title": "Analysis"
},
{
"paragraph_id": 52,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 53,
"text": "The second of these equations can be integrated to give",
"title": "Analysis"
},
{
"paragraph_id": 54,
"text": "and by shifting the position of the x-axis, β can be taken to be 0. Then",
"title": "Analysis"
},
{
"paragraph_id": 55,
"text": "The x-axis thus chosen is called the directrix of the catenary.",
"title": "Analysis"
},
{
"paragraph_id": 56,
"text": "It follows that the magnitude of the tension at a point (x, y) is T = λgy, which is proportional to the distance between the point and the directrix.",
"title": "Analysis"
},
{
"paragraph_id": 57,
"text": "This tension may also be expressed as T = T0 y/a .",
"title": "Analysis"
},
{
"paragraph_id": 58,
"text": "The integral of the expression for dx/ds can be found using standard techniques, giving",
"title": "Analysis"
},
{
"paragraph_id": 59,
"text": "and, again, by shifting the position of the y-axis, α can be taken to be 0. Then",
"title": "Analysis"
},
{
"paragraph_id": 60,
"text": "The y-axis thus chosen passes through the vertex and is called the axis of the catenary.",
"title": "Analysis"
},
{
"paragraph_id": 61,
"text": "These results can be used to eliminate s giving",
"title": "Analysis"
},
{
"paragraph_id": 62,
"text": "The differential equation can be solved using a different approach. From",
"title": "Analysis"
},
{
"paragraph_id": 63,
"text": "it follows that",
"title": "Analysis"
},
{
"paragraph_id": 64,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 65,
"text": "Integrating gives,",
"title": "Analysis"
},
{
"paragraph_id": 66,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 67,
"text": "As before, the x and y-axes can be shifted so α and β can be taken to be 0. Then",
"title": "Analysis"
},
{
"paragraph_id": 68,
"text": "and taking the reciprocal of both sides",
"title": "Analysis"
},
{
"paragraph_id": 69,
"text": "Adding and subtracting the last two equations then gives the solution",
"title": "Analysis"
},
{
"paragraph_id": 70,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 71,
"text": "In general the parameter a is the position of the axis. The equation can be determined in this case as follows:",
"title": "Analysis"
},
{
"paragraph_id": 72,
"text": "Relabel if necessary so that P1 is to the left of P2 and let H be the horizontal and v be the vertical distance from P1 to P2. Translate the axes so that the vertex of the catenary lies on the y-axis and its height a is adjusted so the catenary satisfies the standard equation of the curve",
"title": "Analysis"
},
{
"paragraph_id": 73,
"text": "and let the coordinates of P1 and P2 be (x1, y1) and (x2, y2) respectively. The curve passes through these points, so the difference of height is",
"title": "Analysis"
},
{
"paragraph_id": 74,
"text": "and the length of the curve from P1 to P2 is",
"title": "Analysis"
},
{
"paragraph_id": 75,
"text": "When L − v is expanded using these expressions the result is",
"title": "Analysis"
},
{
"paragraph_id": 76,
"text": "so",
"title": "Analysis"
},
{
"paragraph_id": 77,
"text": "This is a transcendental equation in a and must be solved numerically. Since sinh ( x ) / x {\\displaystyle \\sinh(x)/x} is strictly monotonic on x > 0 {\\displaystyle x>0} , there is at most one solution with a > 0 and so there is at most one position of equilibrium.",
"title": "Analysis"
},
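A minimal numerical sketch of this step (not part of the article text): it assumes the standard relation sqrt(L² − v²) = 2a sinh(H/(2a)) that this derivation normally reaches — the displayed equations are not reproduced in the extracted text, so that form is an assumption. Substituting x = H/(2a) turns it into sinh(x)/x = sqrt(L² − v²)/H, and because sinh(x)/x is strictly increasing for x > 0 a plain bisection finds the unique root. The function name, tolerance and example values are illustrative only.

```python
import math

def catenary_parameter(H, v, L, tol=1e-12):
    """Solve sqrt(L**2 - v**2) = 2*a*sinh(H/(2*a)) for the catenary parameter a.

    H: horizontal separation of the endpoints, v: vertical separation,
    L: arc length.  A solution exists only if L**2 - v**2 > H**2, i.e. the
    chain is longer than the straight line joining the endpoints.
    """
    ratio = math.sqrt(L * L - v * v) / H
    if ratio <= 1.0:
        raise ValueError("chain too short: need L**2 - v**2 > H**2")

    # With x = H/(2a) the equation reads sinh(x)/x = ratio; sinh(x)/x is
    # strictly increasing for x > 0, so the root is unique.
    g = lambda x: math.sinh(x) / x - ratio

    x_lo, x_hi = 1e-12, 1.0
    while g(x_hi) < 0.0:            # expand until the root is bracketed
        x_hi *= 2.0
    while x_hi - x_lo > tol * x_hi:
        mid = 0.5 * (x_lo + x_hi)
        if g(mid) < 0.0:
            x_lo = mid
        else:
            x_hi = mid
    return H / (x_lo + x_hi)        # a = H/(2x) at the midpoint of the bracket

# Example: supports 10 units apart at equal height, 12 units of chain.
a = catenary_parameter(H=10.0, v=0.0, L=12.0)
sag = a * (math.cosh(10.0 / (2.0 * a)) - 1.0)   # equal-height sag h = a(cosh(H/2a) - 1)
print(f"a = {a:.4f}, sag = {sag:.4f}")
```

The sag line uses the standard equal-height formula h = a(cosh(H/(2a)) − 1), again assumed rather than quoted from the article.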
{
"paragraph_id": 78,
"text": "However, if both ends of the curve (P1 and P2) are at the same level (y1 = y2), it can be shown that",
"title": "Analysis"
},
{
"paragraph_id": 79,
"text": "where L is the total length of the curve between P1 and P2 and h is the sag (vertical distance between P1, P2 and the vertex of the curve).",
"title": "Analysis"
},
{
"paragraph_id": 80,
"text": "It can also be shown that",
"title": "Analysis"
},
{
"paragraph_id": 81,
"text": "and",
"title": "Analysis"
},
{
"paragraph_id": 82,
"text": "where H is the horizontal distance between P1 and P2 which are located at the same level (H = x2 − x1).",
"title": "Analysis"
},
{
"paragraph_id": 83,
"text": "The horizontal traction force at P1 and P2 is T0 = λga, where λ is the mass per unit length of the chain or cable.",
"title": "Analysis"
},
{
"paragraph_id": 84,
"text": "Consider a chain of length L {\\displaystyle L} suspended from two points of equal height and at distance D {\\displaystyle D} . The curve has to minimize its potential energy",
"title": "Variational formulation"
},
{
"paragraph_id": 85,
"text": "and is subject to the constraint",
"title": "Variational formulation"
},
{
"paragraph_id": 86,
"text": "The modified Lagrangian is therefore",
"title": "Variational formulation"
},
{
"paragraph_id": 87,
"text": "where λ {\\displaystyle \\lambda } is the Lagrange multiplier to be determined. As the independent variable x {\\displaystyle x} does not appear in the Lagrangian, we can use the Beltrami identity",
"title": "Variational formulation"
},
{
"paragraph_id": 88,
"text": "where C {\\displaystyle C} is an integration constant, in order to obtain a first integral",
"title": "Variational formulation"
},
{
"paragraph_id": 89,
"text": "This is an ordinary first order differential equation that can be solved by the method of separation of variables. Its solution is the usual hyperbolic cosine where the parameters are obtained from the constraints.",
"title": "Variational formulation"
},
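As a quick symbolic check of that claim — assuming that, after the axis shifts described earlier, the Beltrami first integral can be written 1 + y′² = (y/a)², a form taken from the standard derivation rather than from the article's displayed equations — the hyperbolic cosine does satisfy it (sympy is assumed to be available):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
y = a * sp.cosh(x / a)           # the claimed solution y = a cosh(x/a)
yp = sp.diff(y, x)               # y' = sinh(x/a)

# First integral (after the shifts): 1 + y'**2 = (y/a)**2
residual = 1 + yp**2 - (y / a)**2
print(sp.simplify(residual))     # prints 0, by the identity cosh**2 - sinh**2 = 1
```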
{
"paragraph_id": 90,
"text": "If the density of the chain is variable then the analysis above can be adapted to produce equations for the curve given the density, or given the curve to find the density.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 91,
"text": "Let w denote the weight per unit length of the chain, then the weight of the chain has magnitude",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 92,
"text": "where the limits of integration are c and r. Balancing forces as in the uniform chain produces",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 93,
"text": "and",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 94,
"text": "and therefore",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 95,
"text": "Differentiation then gives",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 96,
"text": "In terms of φ and the radius of curvature ρ this becomes",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 97,
"text": "A similar analysis can be done to find the curve followed by the cable supporting a suspension bridge with a horizontal roadway. If the weight of the roadway per unit length is w and the weight of the cable and the wire supporting the bridge is negligible in comparison, then the weight on the cable (see the figure in Catenary#Model of chains and arches) from c to r is wx where x is the horizontal distance between c and r. Proceeding as before gives the differential equation",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 98,
"text": "This is solved by simple integration to get",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 99,
"text": "and so the cable follows a parabola. If the weight of the cable and supporting wires is not negligible then the analysis is more complex.",
"title": "Generalizations with vertical force"
},
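A one-line confirmation of the "simple integration" above, assuming the differential equation takes its usual form T0 dy/dx = wx (constant horizontal tension T0, roadway weight w per unit horizontal length); the symbols and the use of sympy are illustrative:

```python
import sympy as sp

x = sp.symbols('x')
w, T0 = sp.symbols('w T_0', positive=True)
y = sp.Function('y')

# Assumed form of this step: constant horizontal tension, load w per unit horizontal length
sol = sp.dsolve(sp.Eq(T0 * y(x).diff(x), w * x), y(x))
print(sol)   # y(x) = C1 + w*x**2/(2*T_0), a parabola as stated above
```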
{
"paragraph_id": 100,
"text": "In a catenary of equal strength, the cable is strengthened according to the magnitude of the tension at each point, so its resistance to breaking is constant along its length. Assuming that the strength of the cable is proportional to its density per unit length, the weight, w, per unit length of the chain can be written T/c, where c is constant, and the analysis for nonuniform chains can be applied.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 101,
"text": "In this case the equations for tension are",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 102,
"text": "Combining gives",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 103,
"text": "and by differentiation",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 104,
"text": "where ρ is the radius of curvature.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 105,
"text": "The solution to this is",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 106,
"text": "In this case, the curve has vertical asymptotes and this limits the span to πc. Other relations are",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 107,
"text": "The curve was studied 1826 by Davies Gilbert and, apparently independently, by Gaspard-Gustave Coriolis in 1836.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 108,
"text": "Recently, it was shown that this type of catenary could act as a building block of electromagnetic metasurface and was known as \"catenary of equal phase gradient\".",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 109,
"text": "In an elastic catenary, the chain is replaced by a spring which can stretch in response to tension. The spring is assumed to stretch in accordance with Hooke's Law. Specifically, if p is the natural length of a section of spring, then the length of the spring with tension T applied has length",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 110,
"text": "where E is a constant equal to kp, where k is the stiffness of the spring. In the catenary the value of T is variable, but ratio remains valid at a local level, so",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 111,
"text": "The curve followed by an elastic spring can now be derived following a similar method as for the inelastic spring.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 112,
"text": "The equations for tension of the spring are",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 113,
"text": "and",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 114,
"text": "from which",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 115,
"text": "where p is the natural length of the segment from c to r and λ0 is the mass per unit length of the spring with no tension and g is the gravitational field strength. Write",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 116,
"text": "so",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 117,
"text": "Then",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 118,
"text": "from which",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 119,
"text": "Integrating gives the parametric equations",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 120,
"text": "Again, the x and y-axes can be shifted so α and β can be taken to be 0. So",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 121,
"text": "are parametric equations for the curve. At the rigid limit where E is large, the shape of the curve reduces to that of a non-elastic chain.",
"title": "Generalizations with vertical force"
},
{
"paragraph_id": 122,
"text": "With no assumptions being made regarding the force G acting on the chain, the following analysis can be made.",
"title": "Other generalizations"
},
{
"paragraph_id": 123,
"text": "First, let T = T(s) be the force of tension as a function of s. The chain is flexible so it can only exert a force parallel to itself. Since tension is defined as the force that the chain exerts on itself, T must be parallel to the chain. In other words,",
"title": "Other generalizations"
},
{
"paragraph_id": 124,
"text": "where T is the magnitude of T and u is the unit tangent vector.",
"title": "Other generalizations"
},
{
"paragraph_id": 125,
"text": "Second, let G = G(s) be the external force per unit length acting on a small segment of a chain as a function of s. The forces acting on the segment of the chain between s and s + Δs are the force of tension T(s + Δs) at one end of the segment, the nearly opposite force −T(s) at the other end, and the external force acting on the segment which is approximately GΔs. These forces must balance so",
"title": "Other generalizations"
},
{
"paragraph_id": 126,
"text": "Divide by Δs and take the limit as Δs → 0 to obtain",
"title": "Other generalizations"
},
{
"paragraph_id": 127,
"text": "These equations can be used as the starting point in the analysis of a flexible chain acting under any external force. In the case of the standard catenary, G = (0, −λg) where the chain has mass λ per unit length and g is the gravitational field strength.",
"title": "Other generalizations"
}
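A numerical sanity check of this force balance for the standard catenary (not article text): assuming the balanced form d(Tu)/ds + G = 0 with G = (0, −λg), and using T = λgy from the earlier analysis, the derivative d(Tu)/ds should come out as (0, λg) everywhere along the curve. The parameter values and grid below are arbitrary, and numpy is assumed to be available.

```python
import numpy as np

a, lam, g = 2.0, 1.0, 9.81           # catenary parameter, mass per unit length, gravity
x = np.linspace(-3.0, 3.0, 2001)
y = a * np.cosh(x / a)               # the catenary, for which T = lam*g*y

# arc length along the curve and the unit tangent u = (dx/ds, dy/ds)
s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))
T = lam * g * y
u = np.stack([np.gradient(x, s), np.gradient(y, s)], axis=1)
Tu = T[:, None] * u

dTu_ds = np.gradient(Tu, s, axis=0)  # should equal -G = (0, lam*g)
print(dTu_ds[1000])                  # interior point: approximately [0.  9.81]
```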
] | In physics and geometry, a catenary is the curve that an idealized hanging chain or cable assumes under its own weight when supported only at its ends in a uniform gravitational field. The catenary curve has a U-like shape, superficially similar in appearance to a parabola, which it is not. The curve appears in the design of certain types of arches and as a cross section of the catenoid—the shape assumed by a soap film bounded by two parallel circular rings. The catenary is also called the alysoid, chainette, or, particularly in the materials sciences, funicular. Rope statics describes catenaries in a classic statics problem involving a hanging rope. Mathematically, the catenary curve is the graph of the hyperbolic cosine function. The surface of revolution of the catenary curve, the catenoid, is a minimal surface, specifically a minimal surface of revolution. A hanging chain will assume a shape of least potential energy which is a catenary. Galileo Galilei in 1638 discussed the catenary in the book Two New Sciences recognizing that it was different from a parabola. The mathematical properties of the catenary curve were studied by Robert Hooke in the 1670s, and its equation was derived by Leibniz, Huygens and Johann Bernoulli in 1691. Catenaries and related curves are used in architecture and engineering. In the offshore oil and gas industry, "catenary" refers to a steel catenary riser, a pipeline suspended between a production platform and the seabed that adopts an approximate catenary shape. In the rail industry it refers to the overhead wiring that transfers power to trains. In optics and electromagnetics, the hyperbolic cosine and sine functions are basic solutions to Maxwell's equations. The symmetric modes consisting of two evanescent waves would form a catenary shape. | 2001-11-18T21:06:53Z | 2023-12-08T19:29:55Z | [
"Template:Mvar",
"Template:Cite book",
"Template:Mathworld",
"Template:Wikiquote",
"Template:Wikisource1911Enc",
"Template:Anchor",
"Template:Commons category",
"Template:About",
"Template:Short description",
"Template:Reflist",
"Template:Citation",
"Template:Small",
"Template:Cbignore",
"Template:MacTutor",
"Template:PlanetMath",
"Template:Mathematics and art",
"Template:IPAc-en",
"Template:Respell",
"Template:Blockquote",
"Template:Clear",
"Template:Cite web",
"Template:NHLS url",
"Template:Redirect",
"Template:Math",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Catenary |
7,164 | Color temperature | Color temperature is a parameter describing the color of a visible light source by comparing it to the color of light emitted by an idealized opaque, non-reflective body. The temperature of the ideal emitter that matches the color most closely is defined as the color temperature of the original visible light source. Color temperature is usually measured in kelvins. The color temperature scale describes only the color of light emitted by a light source, which may actually be at a different (and often much lower) temperature.
Color temperature has applications in lighting, photography, videography, publishing, manufacturing, astrophysics and other fields. In practice, color temperature is most meaningful for light sources that correspond somewhat closely to the color of some black body, i.e., light in a range going from red to orange to yellow to white to bluish white. Although the concept of correlated color temperature extends the definition to any visible light, the color temperature of a green or a purple light rarely is useful information. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit for absolute temperature.
Color temperatures over 5000 K are called "cool colors" (bluish), while lower color temperatures (2700–3000 K) are called "warm colors" (yellowish). "Warm" in this context is with respect to a traditional categorization of colors, not a reference to black body temperature. The hue-heat hypothesis states that low color temperatures will feel warmer while higher color temperatures will feel cooler. The spectral peak of warm-colored light is closer to infrared, and most natural warm-colored light sources emit significant infrared radiation. The fact that "warm" lighting in this sense actually has a "cooler" color temperature often leads to confusion.
The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in kelvins, or alternatively in micro reciprocal degrees (mired). This permits the definition of a standard by which light sources are compared.
To the extent that a hot surface emits thermal radiation but is not an ideal black-body radiator, the color temperature of the light is not the actual temperature of the surface. An incandescent lamp's light is thermal radiation, and the bulb approximates an ideal black-body radiator, so its color temperature is essentially the temperature of the filament. Thus a relatively low temperature emits a dull red and a high temperature emits the almost white of the traditional incandescent light bulb. Metal workers are able to judge the temperature of hot metals by their color, from dark red to orange-white and then white (see red heat).
Many other light sources, such as fluorescent lamps, or light emitting diodes (LEDs) emit light primarily by processes other than thermal radiation. This means that the emitted radiation does not follow the form of a black-body spectrum. These sources are assigned what is known as a correlated color temperature (CCT). CCT is the color temperature of a black-body radiator which to human color perception most closely matches the light from the lamp. Because such an approximation is not required for incandescent light, the CCT for an incandescent light is simply its unadjusted temperature, derived from comparison to a black-body radiator.
The Sun closely approximates a black-body radiator. The effective temperature, defined by the total radiative power per unit area, is 5772 K. The color temperature of sunlight above the atmosphere is about 5900 K.
The Sun may appear red, orange, yellow, or white from Earth, depending on its position in the sky. The changing color of the Sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in black-body radiation. Rayleigh scattering of sunlight by Earth's atmosphere, which scatters blue light more than red light, causes the blue color of the sky.
Some daylight in the early morning and late afternoon (the golden hours) has a lower ("warmer") color temperature due to increased scattering of shorter-wavelength sunlight by atmospheric particulates – an optical phenomenon called the Tyndall effect.
Daylight has a spectrum similar to that of a black body with a correlated color temperature of 6500 K (D65 viewing standard) or 5500 K (daylight-balanced photographic film standard).
For colors based on black-body theory, blue occurs at higher temperatures, whereas red occurs at lower temperatures. This is the opposite of the cultural associations attributed to colors, in which "red" is "hot", and "blue" is "cold".
For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.
CCT dimming for LED technology is regarded as a difficult task, since binning, age and temperature drift effects of LEDs change the actual color value output. Here feedback loop systems are used, for example with color sensors, to actively monitor and control the color output of multiple color mixing LEDs.
In fishkeeping, color temperature has different functions and foci in the various branches.
In digital photography, the term color temperature sometimes refers to remapping of color values to simulate variations in ambient color temperature. Most digital cameras and raw image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten, etc.) while others allow explicit entry of white balance values in kelvins. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled "tint") adding the magenta–green axis, and are to some extent arbitrary and a matter of artistic interpretation.
Photographic emulsion film does not respond to lighting color identically to the human retina or visual perception. An object that appears to the observer to be white may turn out to be very blue or orange in a photograph. The color balance may need to be corrected during printing to achieve a neutral color print. The extent of this correction is limited since color film normally has three layers sensitive to different colors and when used under the "wrong" light source, every layer may not respond proportionally, giving odd color casts in the shadows, although the mid-tones may have been correctly white-balanced under the enlarger. Light sources with discontinuous spectra, such as fluorescent tubes, cannot be fully corrected in printing either, since one of the layers may barely have recorded an image at all.
Photographic film is made for specific light sources (most commonly daylight film and tungsten film), and, used properly, will create a neutral color print. Matching the sensitivity of the film to the color temperature of the light source is one way to balance color. If tungsten film is used indoors with incandescent lamps, the yellowish-orange light of the tungsten incandescent lamps will appear as white (3200 K) in the photograph. Color negative film is almost always daylight-balanced, since it is assumed that color can be adjusted in printing (with limitations, see above). Color transparency film, being the final artefact in the process, has to be matched to the light source or filters must be used to correct color.
Filters on a camera lens, or color gels over the light source(s) may be used to correct color balance. When shooting with a bluish light (high color temperature) source such as on an overcast day, in the shade, in window light, or if using tungsten film with white or blue light, a yellowish-orange filter will correct this. For shooting with daylight film (calibrated to 5600 K) under warmer (low color temperature) light sources such as sunsets, candlelight or tungsten lighting, a bluish (e.g. #80A) filter may be used. More-subtle filters are needed to correct for the difference between, say 3200 K and 3400 K tungsten lamps or to correct for the slightly blue cast of some flash tubes, which may be 6000 K.
If there is more than one light source with varied color temperatures, one way to balance the color is to use daylight film and place color-correcting gel filters over each light source.
Photographers sometimes use color temperature meters. These are usually designed to read only two regions along the visible spectrum (red and blue); more expensive ones read three regions (red, green, and blue). However, they are ineffective with sources such as fluorescent or discharge lamps, whose light varies in color and may be harder to correct for. Because this light is often greenish, a magenta filter may correct it. More sophisticated colorimetry tools can be used if such meters are lacking.
In the desktop publishing industry, it is important to know a monitor's color temperature. Color matching software, such as Apple's ColorSync Utility for MacOS, measures a monitor's color temperature and then adjusts its settings accordingly. This enables on-screen color to more closely match printed color. Common monitor color temperatures, along with matching standard illuminants in parentheses, are as follows:
D50 is scientific shorthand for a standard illuminant: the daylight spectrum at a correlated color temperature of 5000 K. Similar definitions exist for D55, D65 and D75. Designations such as D50 are used to help classify color temperatures of light tables and viewing booths. When viewing a color slide at a light table, it is important that the light be balanced properly so that the colors are not shifted towards the red or blue.
Digital cameras, web graphics, DVDs, etc., are normally designed for a 6500 K color temperature. The sRGB standard commonly used for images on the Internet stipulates a 6500 K display white point.
The NTSC and PAL TV norms call for a compliant TV screen to display an electrically black and white signal (minimal color saturation) at a color temperature of 6500 K. On many consumer-grade televisions, there is a very noticeable deviation from this requirement. However, higher-end consumer-grade televisions can have their color temperatures adjusted to 6500 K by using a preprogrammed setting or a custom calibration. Current versions of ATSC explicitly call for the color temperature data to be included in the data stream, but old versions of ATSC allowed this data to be omitted. In this case, current versions of ATSC cite default colorimetry standards depending on the format. Both of the cited standards specify a 6500 K color temperature.
Most video and digital still cameras can adjust for color temperature by zooming into a white or neutral colored object and setting the manual "white balance" (telling the camera that "this object is white"); the camera then shows true white as white and adjusts all the other colors accordingly. White-balancing is necessary especially when indoors under fluorescent lighting and when moving the camera from one lighting situation to another. Most cameras also have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today's digital cameras and produce an accurate white balance in a wide variety of lighting situations.
Video camera operators can white-balance objects that are not white, downplaying the color of the object used for white-balancing. For instance, they can bring more warmth into a picture by white-balancing off something that is light blue, such as faded blue denim; in this way white-balancing can replace a filter or lighting gel when those are not available.
Cinematographers do not "white balance" in the same way as video camera operators; they use techniques such as filters, choice of film stock, pre-flashing, and, after shooting, color grading, both by exposure at the labs and also digitally. Cinematographers also work closely with set designers and lighting crews to achieve the desired color effects.
For artists, most pigments and papers have a cool or warm cast, as the human eye can detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a "warm gray". Green, blue, or purple create "cool grays". Note that this sense of temperature is the reverse of that of real temperature; bluer is described as "cooler" even though it corresponds to a higher-temperature black body.
Lighting designers sometimes select filters by color temperature, commonly to match light that is theoretically white. Since fixtures using discharge type lamps produce a light of a considerably higher color temperature than do tungsten lamps, using the two in conjunction could potentially produce a stark contrast, so sometimes fixtures with HID lamps, commonly producing light of 6000–7000 K, are fitted with 3200 K filters to emulate tungsten light. Fixtures with color mixing features or with multiple colors (if including 3200 K), are also capable of producing tungsten-like light. Color temperature may also be a factor when selecting lamps, since each is likely to have a different color temperature.
The CIE color rendering index (CRI) is a method to determine how well a light source's illumination of eight sample patches compares to the illumination provided by a reference source. Cited together, the CRI and CCT give a numerical estimate of what reference (ideal) light source best approximates a particular artificial light, and what the difference is.
Light sources and illuminants may be characterized by their spectral power distribution (SPD). The relative SPD curves provided by many manufacturers may have been produced using 10 nm increments or more on their spectroradiometer. The result is what would seem to be a smoother ("fuller spectrum") power distribution than the lamp actually has. Owing to their spiky distribution, much finer increments are advisable for taking measurements of fluorescent lights, and this requires more expensive equipment.
In astronomy, the color temperature is defined by the local slope of the SPD at a given wavelength, or, in practice, a wavelength range. Given, for example, the color magnitudes B and V which are calibrated to be equal for an A0V star (e.g. Vega), the stellar color temperature T_C is given by the temperature for which the color index B − V of a black-body radiator fits the stellar one. Besides B − V, other color indices can be used as well. The color temperature (as well as the correlated color temperature defined above) may differ substantially from the effective temperature given by the radiative flux of the stellar surface. For example, the color temperature of an A0V star is about 15000 K compared to an effective temperature of about 9500 K.
For most applications in astronomy (e.g., to place a star on the HR diagram or to determine the temperature of a model flux fitting an observed spectrum) the effective temperature is the quantity of interest. Various color-effective temperature relations exist in the literature. These relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. | [
{
"paragraph_id": 0,
"text": "Color temperature is a parameter describing the color of a visible light source by comparing it to the color of light emitted by an idealized opaque, non-reflective body. The temperature of the ideal emitter that matches the color most closely is defined as the color temperature of the original visible light source. Color temperature is usually measured in kelvins. The color temperature scale describes only the color of light emitted by a light source, which may actually be at a different (and often much lower) temperature.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Color temperature has applications in lighting, photography, videography, publishing, manufacturing, astrophysics and other fields. In practice, color temperature is most meaningful for light sources that correspond somewhat closely to the color of some black body, i.e., light in a range going from red to orange to yellow to white to bluish white. Although the concept of correlated color temperature extends the definition to any visible light, the color temperature of a green or a purple light rarely is useful information. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit for absolute temperature.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Color temperatures over 5000 K are called \"cool colors\" (bluish), while lower color temperatures (2700–3000 K) are called \"warm colors\" (yellowish). \"Warm\" in this context is with respect to a traditional categorization of colors, not a reference to black body temperature. The hue-heat hypothesis states that low color temperatures will feel warmer while higher color temperatures will feel cooler. The spectral peak of warm-colored light is closer to infrared, and most natural warm-colored light sources emit significant infrared radiation. The fact that \"warm\" lighting in this sense actually has a \"cooler\" color temperature often leads to confusion.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in kelvins, or alternatively in micro reciprocal degrees (mired). This permits the definition of a standard by which light sources are compared.",
"title": "Categorizing different lighting"
},
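Because a mired value is just one million divided by the temperature in kelvins, conversion is a one-liner; the helper names and the +130 mired example (roughly what a daylight-to-tungsten warming filter provides) are illustrative:

```python
def kelvin_to_mired(t_kelvin: float) -> float:
    """Micro reciprocal degrees (mired) for a color temperature in kelvins."""
    return 1_000_000.0 / t_kelvin

def mired_to_kelvin(mired: float) -> float:
    return 1_000_000.0 / mired

# Example: warming 5500 K daylight with a +130 mired filter
shifted = kelvin_to_mired(5500) + 130
print(round(mired_to_kelvin(shifted)))   # about 3207 K, close to a tungsten balance
```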
{
"paragraph_id": 4,
"text": "To the extent that a hot surface emits thermal radiation but is not an ideal black-body radiator, the color temperature of the light is not the actual temperature of the surface. An incandescent lamp's light is thermal radiation, and the bulb approximates an ideal black-body radiator, so its color temperature is essentially the temperature of the filament. Thus a relatively low temperature emits a dull red and a high temperature emits the almost white of the traditional incandescent light bulb. Metal workers are able to judge the temperature of hot metals by their color, from dark red to orange-white and then white (see red heat).",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 5,
"text": "Many other light sources, such as fluorescent lamps, or light emitting diodes (LEDs) emit light primarily by processes other than thermal radiation. This means that the emitted radiation does not follow the form of a black-body spectrum. These sources are assigned what is known as a correlated color temperature (CCT). CCT is the color temperature of a black-body radiator which to human color perception most closely matches the light from the lamp. Because such an approximation is not required for incandescent light, the CCT for an incandescent light is simply its unadjusted temperature, derived from comparison to a black-body radiator.",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 6,
"text": "The Sun closely approximates a black-body radiator. The effective temperature, defined by the total radiative power per square unit, is 5772 K. The color temperature of sunlight above the atmosphere is about 5900 K.",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 7,
"text": "The Sun may appear red, orange, yellow, or white from Earth, depending on its position in the sky. The changing color of the Sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in black-body radiation. Rayleigh scattering of sunlight by Earth's atmosphere causes the blue color of the sky, which tends to scatter blue light more than red light.",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 8,
"text": "Some daylight in the early morning and late afternoon (the golden hours) has a lower (\"warmer\") color temperature due to increased scattering of shorter-wavelength sunlight by atmospheric particulates – an optical phenomenon called the Tyndall effect.",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 9,
"text": "Daylight has a spectrum similar to that of a black body with a correlated color temperature of 6500 K (D65 viewing standard) or 5500 K (daylight-balanced photographic film standard).",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 10,
"text": "For colors based on black-body theory, blue occurs at higher temperatures, whereas red occurs at lower temperatures. This is the opposite of the cultural associations attributed to colors, in which \"red\" is \"hot\", and \"blue\" is \"cold\".",
"title": "Categorizing different lighting"
},
{
"paragraph_id": 11,
"text": "For lighting building interiors, it is often important to take into account the color temperature of illumination. A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.",
"title": "Applications"
},
{
"paragraph_id": 12,
"text": "CCT dimming for LED technology is regarded as a difficult task, since binning, age and temperature drift effects of LEDs change the actual color value output. Here feedback loop systems are used, for example with color sensors, to actively monitor and control the color output of multiple color mixing LEDs.",
"title": "Applications"
},
{
"paragraph_id": 13,
"text": "In fishkeeping, color temperature has different functions and foci in the various branches.",
"title": "Applications"
},
{
"paragraph_id": 14,
"text": "In digital photography, the term color temperature sometimes refers to remapping of color values to simulate variations in ambient color temperature. Most digital cameras and raw image software provide presets simulating specific ambient values (e.g., sunny, cloudy, tungsten, etc.) while others allow explicit entry of white balance values in kelvins. These settings vary color values along the blue–yellow axis, while some software includes additional controls (sometimes labeled \"tint\") adding the magenta–green axis, and are to some extent arbitrary and a matter of artistic interpretation.",
"title": "Applications"
},
{
"paragraph_id": 15,
"text": "Photographic emulsion film does not respond to lighting color identically to the human retina or visual perception. An object that appears to the observer to be white may turn out to be very blue or orange in a photograph. The color balance may need to be corrected during printing to achieve a neutral color print. The extent of this correction is limited since color film normally has three layers sensitive to different colors and when used under the \"wrong\" light source, every layer may not respond proportionally, giving odd color casts in the shadows, although the mid-tones may have been correctly white-balanced under the enlarger. Light sources with discontinuous spectra, such as fluorescent tubes, cannot be fully corrected in printing either, since one of the layers may barely have recorded an image at all.",
"title": "Applications"
},
{
"paragraph_id": 16,
"text": "Photographic film is made for specific light sources (most commonly daylight film and tungsten film), and, used properly, will create a neutral color print. Matching the sensitivity of the film to the color temperature of the light source is one way to balance color. If tungsten film is used indoors with incandescent lamps, the yellowish-orange light of the tungsten incandescent lamps will appear as white (3200 K) in the photograph. Color negative film is almost always daylight-balanced, since it is assumed that color can be adjusted in printing (with limitations, see above). Color transparency film, being the final artefact in the process, has to be matched to the light source or filters must be used to correct color.",
"title": "Applications"
},
{
"paragraph_id": 17,
"text": "Filters on a camera lens, or color gels over the light source(s) may be used to correct color balance. When shooting with a bluish light (high color temperature) source such as on an overcast day, in the shade, in window light, or if using tungsten film with white or blue light, a yellowish-orange filter will correct this. For shooting with daylight film (calibrated to 5600 K) under warmer (low color temperature) light sources such as sunsets, candlelight or tungsten lighting, a bluish (e.g. #80A) filter may be used. More-subtle filters are needed to correct for the difference between, say 3200 K and 3400 K tungsten lamps or to correct for the slightly blue cast of some flash tubes, which may be 6000 K.",
"title": "Applications"
},
{
"paragraph_id": 18,
"text": "If there is more than one light source with varied color temperatures, one way to balance the color is to use daylight film and place color-correcting gel filters over each light source.",
"title": "Applications"
},
{
"paragraph_id": 19,
"text": "Photographers sometimes use color temperature meters. These are usually designed to read only two regions along the visible spectrum (red and blue); more expensive ones read three regions (red, green, and blue). However, they are ineffective with sources such as fluorescent or discharge lamps, whose light varies in color and may be harder to correct for. Because this light is often greenish, a magenta filter may correct it. More sophisticated colorimetry tools can be used if such meters are lacking.",
"title": "Applications"
},
{
"paragraph_id": 20,
"text": "In the desktop publishing industry, it is important to know a monitor's color temperature. Color matching software, such as Apple's ColorSync Utility for MacOS, measures a monitor's color temperature and then adjusts its settings accordingly. This enables on-screen color to more closely match printed color. Common monitor color temperatures, along with matching standard illuminants in parentheses, are as follows:",
"title": "Applications"
},
{
"paragraph_id": 21,
"text": "D50 is scientific shorthand for a standard illuminant: the daylight spectrum at a correlated color temperature of 5000 K. Similar definitions exist for D55, D65 and D75. Designations such as D50 are used to help classify color temperatures of light tables and viewing booths. When viewing a color slide at a light table, it is important that the light be balanced properly so that the colors are not shifted towards the red or blue.",
"title": "Applications"
},
{
"paragraph_id": 22,
"text": "Digital cameras, web graphics, DVDs, etc., are normally designed for a 6500 K color temperature. The sRGB standard commonly used for images on the Internet stipulates a 6500 K display white point.",
"title": "Applications"
},
{
"paragraph_id": 23,
"text": "The NTSC and PAL TV norms call for a compliant TV screen to display an electrically black and white signal (minimal color saturation) at a color temperature of 6500 K. On many consumer-grade televisions, there is a very noticeable deviation from this requirement. However, higher-end consumer-grade televisions can have their color temperatures adjusted to 6500 K by using a preprogrammed setting or a custom calibration. Current versions of ATSC explicitly call for the color temperature data to be included in the data stream, but old versions of ATSC allowed this data to be omitted. In this case, current versions of ATSC cite default colorimetry standards depending on the format. Both of the cited standards specify a 6500 K color temperature.",
"title": "Applications"
},
{
"paragraph_id": 24,
"text": "Most video and digital still cameras can adjust for color temperature by zooming into a white or neutral colored object and setting the manual \"white balance\" (telling the camera that \"this object is white\"); the camera then shows true white as white and adjusts all the other colors accordingly. White-balancing is necessary especially when indoors under fluorescent lighting and when moving the camera from one lighting situation to another. Most cameras also have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today's digital cameras and produce an accurate white balance in a wide variety of lighting situations.",
"title": "Applications"
},
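A minimal sketch of what that manual white-balance step amounts to computationally: a per-channel (diagonal) rescaling chosen so that the patch declared neutral comes out grey. Real cameras do this in a sensor colour space with further processing, so the function and values below are an illustration only, not any particular camera's algorithm.

```python
import numpy as np

def white_balance(image, neutral_rgb):
    """Rescale R, G, B so the measured 'neutral' patch becomes grey.

    image: float array of shape (H, W, 3) with values in [0, 1];
    neutral_rgb: the average RGB read off the object declared to be white/neutral.
    """
    neutral = np.asarray(neutral_rgb, dtype=float)
    gains = neutral.mean() / neutral          # per-channel gains
    return np.clip(np.asarray(image, dtype=float) * gains, 0.0, 1.0)

# Example: a patch known to be neutral reads (0.8, 0.6, 0.4) under warm light.
patch = np.array([[[0.8, 0.6, 0.4]]])
print(white_balance(patch, neutral_rgb=(0.8, 0.6, 0.4)))   # -> [[[0.6 0.6 0.6]]]
```

Dividing by the measured patch and multiplying by its mean removes the colour cast while keeping overall brightness roughly unchanged, which is why the whole scene shifts plausibly rather than just the patch.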
{
"paragraph_id": 25,
"text": "Video camera operators can white-balance objects that are not white, downplaying the color of the object used for white-balancing. For instance, they can bring more warmth into a picture by white-balancing off something that is light blue, such as faded blue denim; in this way white-balancing can replace a filter or lighting gel when those are not available.",
"title": "Applications"
},
{
"paragraph_id": 26,
"text": "Cinematographers do not \"white balance\" in the same way as video camera operators; they use techniques such as filters, choice of film stock, pre-flashing, and, after shooting, color grading, both by exposure at the labs and also digitally. Cinematographers also work closely with set designers and lighting crews to achieve the desired color effects.",
"title": "Applications"
},
{
"paragraph_id": 27,
"text": "For artists, most pigments and papers have a cool or warm cast, as the human eye can detect even a minute amount of saturation. Gray mixed with yellow, orange, or red is a \"warm gray\". Green, blue, or purple create \"cool grays\". Note that this sense of temperature is the reverse of that of real temperature; bluer is described as \"cooler\" even though it corresponds to a higher-temperature black body.",
"title": "Applications"
},
{
"paragraph_id": 28,
"text": "Lighting designers sometimes select filters by color temperature, commonly to match light that is theoretically white. Since fixtures using discharge type lamps produce a light of a considerably higher color temperature than do tungsten lamps, using the two in conjunction could potentially produce a stark contrast, so sometimes fixtures with HID lamps, commonly producing light of 6000–7000 K, are fitted with 3200 K filters to emulate tungsten light. Fixtures with color mixing features or with multiple colors (if including 3200 K), are also capable of producing tungsten-like light. Color temperature may also be a factor when selecting lamps, since each is likely to have a different color temperature.",
"title": "Applications"
},
{
"paragraph_id": 29,
"text": "The CIE color rendering index (CRI) is a method to determine how well a light source's illumination of eight sample patches compares to the illumination provided by a reference source. Cited together, the CRI and CCT give a numerical estimate of what reference (ideal) light source best approximates a particular artificial light, and what the difference is.",
"title": "Color rendering index"
},
{
"paragraph_id": 30,
"text": "Light sources and illuminants may be characterized by their spectral power distribution (SPD). The relative SPD curves provided by many manufacturers may have been produced using 10 nm increments or more on their spectroradiometer. The result is what would seem to be a smoother (\"fuller spectrum\") power distribution than the lamp actually has. Owing to their spiky distribution, much finer increments are advisable for taking measurements of fluorescent lights, and this requires more expensive equipment.",
"title": "Spectral power distribution"
},
{
"paragraph_id": 31,
"text": "In astronomy, the color temperature is defined by the local slope of the SPD at a given wavelength, or, in practice, a wavelength range. Given, for example, the color magnitudes B and V which are calibrated to be equal for an A0V star (e.g. Vega), the stellar color temperature T C {\\displaystyle T_{C}} is given by the temperature for which the color index B − V {\\displaystyle B-V} of a black-body radiator fits the stellar one. Besides the B − V {\\displaystyle B-V} , other color indices can be used as well. The color temperature (as well as the correlated color temperature defined above) may differ largely from the effective temperature given by the radiative flux of the stellar surface. For example, the color temperature of an A0V star is about 15000 K compared to an effective temperature of about 9500 K.",
"title": "Color temperature in astronomy"
},
{
"paragraph_id": 32,
"text": "For most applications in astronomy (e.g., to place a star on the HR diagram or to determine the temperature of a model flux fitting an observed spectrum) the effective temperature is the quantity of interest. Various color-effective temperature relations exist in the literature. There relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity",
"title": "Color temperature in astronomy"
}
] | Color temperature is a parameter describing the color of a visible light source by comparing it to the color of light emitted by an idealized opaque, non-reflective body. The temperature of the ideal emitter that matches the color most closely is defined as the color temperature of the original visible light source. Color temperature is usually measured in kelvins. The color temperature scale describes only the color of light emitted by a light source, which may actually be at a different temperature. Color temperature has applications in lighting, photography, videography, publishing, manufacturing, astrophysics and other fields. In practice, color temperature is most meaningful for light sources that correspond somewhat closely to the color of some black body, i.e., light in a range going from red to orange to yellow to white to bluish white. Although the concept of correlated color temperature extends the definition to any visible light, the color temperature of a green or a purple light rarely is useful information. Color temperature is conventionally expressed in kelvins, using the symbol K, a unit for absolute temperature. Color temperatures over 5000 K are called "cool colors" (bluish), while lower color temperatures (2700–3000 K) are called "warm colors" (yellowish). "Warm" in this context is with respect to a traditional categorization of colors, not a reference to black body temperature. The hue-heat hypothesis states that low color temperatures will feel warmer while higher color temperatures will feel cooler. The spectral peak of warm-colored light is closer to infrared, and most natural warm-colored light sources emit significant infrared radiation. The fact that "warm" lighting in this sense actually has a "cooler" color temperature often leads to confusion. | 2001-11-18T21:52:32Z | 2023-12-31T15:35:19Z | [
"Template:Dead link",
"Template:Artificial light sources",
"Template:Color topics",
"Template:Use mdy dates",
"Template:Unreferenced section",
"Template:Excerpt",
"Template:Cite web",
"Template:Anchor",
"Template:Reflist",
"Template:Cite book",
"Template:Short description",
"Template:More citations needed",
"Template:Sub",
"Template:Main",
"Template:Cite journal",
"Template:Photography subject",
"Template:Authority control",
"Template:Use American English",
"Template:Stack",
"Template:Citation needed",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Color_temperature |
7,165 | Cartoon | A cartoon is a type of visual art that is typically drawn, frequently animated, in an unrealistic or semi-realistic style. The specific meaning has evolved, but the modern usage usually refers to either: an image or series of images intended for satire, caricature, or humor; or a motion picture that relies on a sequence of illustrations for its animation. Someone who creates cartoons in the first sense is called a cartoonist, and in the second sense they are usually called an animator.
The concept originated in the Middle Ages, and first described a preparatory drawing for a piece of art, such as a painting, fresco, tapestry, or stained glass window. In the 19th century, beginning in Punch magazine in 1843, cartoon came to refer – ironically at first – to humorous artworks in magazines and newspapers. Then it also was used for political cartoons and comic strips. When the medium developed, in the early 20th century, it began to refer to animated films that resembled print cartoons.
A cartoon (from Italian: cartone and Dutch: karton—words describing strong, heavy paper or pasteboard) is a full-size drawing made on sturdy paper as a design or modello for a painting, stained glass, or tapestry. Cartoons were typically used in the production of frescoes, to accurately link the component parts of the composition when painted on damp plaster over a series of days (giornate). In media such as tapestry or stained glass, the cartoon was handed over by the artist to the skilled craftsmen who produced the final work.
Such cartoons often have pinpricks along the outlines of the design so that a bag of soot patted or "pounced" over a cartoon, held against the wall, would leave black dots on the plaster ("pouncing"). Cartoons by painters, such as the Raphael Cartoons in London, and examples by Leonardo da Vinci, are highly prized in their own right. Tapestry cartoons, usually colored, could be placed behind the loom, where the weaver would replicate the design. As tapestries are worked from behind, a mirror could be placed behind the loom to allow the weaver to see their work; in such cases the cartoon was placed behind the weaver.
In print media, a cartoon is a drawing or series of drawings, usually humorous in intent. This usage dates from 1843, when Punch magazine applied the term to satirical drawings in its pages, particularly sketches by John Leech. The first of these parodied the preparatory cartoons for grand historical frescoes in the then-new Palace of Westminster in London.
Sir John Tenniel—illustrator of Alice's Adventures in Wonderland—joined Punch in 1850, and over 50 years contributed over two thousand cartoons.
Cartoons can be divided into gag cartoons, which include editorial cartoons, and comic strips.
Modern single-panel gag cartoons, found in magazines, generally consist of a single drawing with a typeset caption positioned beneath, or, less often, a speech balloon. Newspaper syndicates have also distributed single-panel gag cartoons by Mel Calman, Bill Holman, Gary Larson, George Lichty, Fred Neher and others. Many consider New Yorker cartoonist Peter Arno the father of the modern gag cartoon (as did Arno himself). The roster of magazine gag cartoonists includes Charles Addams, Charles Barsotti, and Chon Day.
Bill Hoest, Jerry Marcus, and Virgil Partch began as magazine gag cartoonists and moved to syndicated comic strips. Richard Thompson illustrated numerous feature articles in The Washington Post before creating his Cul de Sac comic strip. The sports section of newspapers usually featured cartoons, sometimes including syndicated features such as Chester "Chet" Brown's All in Sport.
Editorial cartoons are found almost exclusively in news publications and news websites. Although they also employ humor, they are more serious in tone, commonly using irony or satire. The art usually acts as a visual metaphor to illustrate a point of view on current social or political topics. Editorial cartoons often include speech balloons and sometimes use multiple panels. Editorial cartoonists of note include Herblock, David Low, Jeff MacNelly, Mike Peters, and Gerald Scarfe.
Comic strips, also known as cartoon strips in the United Kingdom, are found daily in newspapers worldwide, and are usually a short series of cartoon illustrations in sequence. In the United States, they are not commonly called "cartoons" themselves, but rather "comics" or "funnies". Nonetheless, the creators of comic strips—as well as comic books and graphic novels—are usually referred to as "cartoonists". Although humor is the most prevalent subject matter, adventure and drama are also represented in this medium. Some noteworthy cartoonists of humorous comic strips are Scott Adams, Charles Schulz, E. C. Segar, Mort Walker and Bill Watterson.
Political cartoons are like illustrated editorials that serve as visual commentaries on political events. They offer subtle criticism, cleverly couched in humour and satire, so that the person criticized does not become embittered.
The pictorial satire of William Hogarth is regarded as a precursor to the development of political cartoons in 18th century England. George Townshend produced some of the first overtly political cartoons and caricatures in the 1750s. The medium began to develop in the latter part of the 18th century under the direction of its great exponents, James Gillray and Thomas Rowlandson, both from London. Gillray explored the use of the medium for lampooning and caricature, and has been referred to as the father of the political cartoon. Calling the king, prime ministers and generals to account for their behaviour, Gillray directed many of his satires against George III, depicting him as a pretentious buffoon, while the bulk of his work was dedicated to ridiculing the ambitions of revolutionary France and Napoleon. George Cruikshank became the leading cartoonist in the period following Gillray, from 1815 until the 1840s. He was renowned for his social caricatures of English life for popular publications.
By the mid 19th century, major political newspapers in many other countries featured cartoons commenting on the politics of the day. Thomas Nast, in New York City, showed how realistic German drawing techniques could redefine American cartooning. His 160 cartoons relentlessly pursued the criminal character of the Tweed machine in New York City, and helped bring it down. Indeed, Tweed was arrested in Spain when police identified him from Nast's cartoons. In Britain, Sir John Tenniel was the toast of London. In France under the July Monarchy, Honoré Daumier took up the new genre of political and social caricature, most famously lampooning the rotund King Louis Philippe.
Political cartoons can be humorous or satirical, sometimes with piercing effect. The target of the humor may complain, but can seldom fight back. Lawsuits have been very rare; the first successful lawsuit against a cartoonist in over a century in Britain came in 1921, when J. H. Thomas, the leader of the National Union of Railwaymen (NUR), initiated libel proceedings against the magazine of the British Communist Party. Thomas claimed defamation in the form of cartoons and words depicting the events of "Black Friday", when he allegedly betrayed the locked-out Miners' Federation. To Thomas, the framing of his image by the far left threatened to grievously degrade his character in the popular imagination. Soviet-inspired communism was a new element in European politics, and cartoonists unrestrained by tradition tested the boundaries of libel law. Thomas won the lawsuit and restored his reputation.
Cartoons such as xkcd have also found their place in the world of science, mathematics, and technology. For example, the cartoon Wonderlab looked at daily life in the chemistry lab. In the U.S., one well-known cartoonist for these fields is Sidney Harris. Many of Gary Larson's cartoons have a scientific flavor.
Books with cartoons are usually magazine-format "comic books", or occasionally reprints of newspaper cartoons.
In Britain in the 1930s adventure magazines became quite popular, especially those published by DC Thomson; the publisher sent observers around the country to talk to boys and learn what they wanted to read about. The story line in magazines, comic books and cinema that most appealed to boys was the glamorous heroism of British soldiers fighting wars that were exciting and just. DC Thomson issued the first The Dandy Comic in December 1937. It had a revolutionary design that broke away from the usual children's comics that were published broadsheet in size and not very colourful. Thomson capitalized on its success with a similar product The Beano in 1938.
On some occasions, new gag cartoons have been created for book publication, as was the case with Think Small, a 1967 promotional book distributed as a giveaway by Volkswagen dealers. Bill Hoest and other cartoonists of that decade drew cartoons showing Volkswagens, and these were published along with humorous automotive essays by such humorists as H. Allen Smith, Roger Price and Jean Shepherd. The book's design juxtaposed each cartoon alongside a photograph of the cartoon's creator.
Because of the stylistic similarities between comic strips and early animated films, cartoon came to refer to animation, and the word cartoon is currently used in reference to both animated cartoons and gag cartoons. While animation designates any style of illustrated images seen in rapid succession to give the impression of movement, the word "cartoon" is most often used as a descriptor for television programs and short films aimed at children, possibly featuring anthropomorphized animals, superheroes, the adventures of child protagonists or related themes.
In the 1980s, cartoon was shortened to toon, referring to characters in animated productions. This term was popularized in 1988 by the combined live-action/animated film Who Framed Roger Rabbit, followed in 1990 by the animated TV series Tiny Toon Adventures. | [
{
"paragraph_id": 0,
"text": "A cartoon is a type of visual art that is typically drawn, frequently animated, in an unrealistic or semi-realistic style. The specific meaning has evolved, but the modern usage usually refers to either: an image or series of images intended for satire, caricature, or humor; or a motion picture that relies on a sequence of illustrations for its animation. Someone who creates cartoons in the first sense is called a cartoonist, and in the second sense they are usually called an animator.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The concept originated in the Middle Ages, and first described a preparatory drawing for a piece of art, such as a painting, fresco, tapestry, or stained glass window. In the 19th century, beginning in Punch magazine in 1843, cartoon came to refer – ironically at first – to humorous artworks in magazines and newspapers. Then it also was used for political cartoons and comic strips. When the medium developed, in the early 20th century, it began to refer to animated films that resembled print cartoons.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A cartoon (from Italian: cartone and Dutch: karton—words describing strong, heavy paper or pasteboard) is a full-size drawing made on sturdy paper as a design or modello for a painting, stained glass, or tapestry. Cartoons were typically used in the production of frescoes, to accurately link the component parts of the composition when painted on damp plaster over a series of days (giornate). In media such as stained tapestry or stained glass, the cartoon was handed over by the artist to the skilled craftsmen who produced the final work.",
"title": "Fine art"
},
{
"paragraph_id": 3,
"text": "Such cartoons often have pinpricks along the outlines of the design so that a bag of soot patted or \"pounced\" over a cartoon, held against the wall, would leave black dots on the plaster (\"pouncing\"). Cartoons by painters, such as the Raphael Cartoons in London, and examples by Leonardo da Vinci, are highly prized in their own right. Tapestry cartoons, usually colored, could be placed behind the loom, where the weaver would replicate the design. As tapestries are worked from behind, a mirror could be placed behind the loom to allow the weaver to see their work; in such cases the cartoon was placed behind the weaver.",
"title": "Fine art"
},
{
"paragraph_id": 4,
"text": "In print media, a cartoon is a drawing or series of drawings, usually humorous in intent. This usage dates from 1843, when Punch magazine applied the term to satirical drawings in its pages, particularly sketches by John Leech. The first of these parodied the preparatory cartoons for grand historical frescoes in the then-new Palace of Westminster in London.",
"title": "Mass media"
},
{
"paragraph_id": 5,
"text": "Sir John Tenniel—illustrator of Alice's Adventures in Wonderland—joined Punch in 1850, and over 50 years contributed over two thousand cartoons.",
"title": "Mass media"
},
{
"paragraph_id": 6,
"text": "Cartoons can be divided into gag cartoons, which include editorial cartoons, and comic strips.",
"title": "Mass media"
},
{
"paragraph_id": 7,
"text": "Modern single-panel gag cartoons, found in magazines, generally consist of a single drawing with a typeset caption positioned beneath, or, less often, a speech balloon. Newspaper syndicates have also distributed single-panel gag cartoons by Mel Calman, Bill Holman, Gary Larson, George Lichty, Fred Neher and others. Many consider New Yorker cartoonist Peter Arno the father of the modern gag cartoon (as did Arno himself). The roster of magazine gag cartoonists includes Charles Addams, Charles Barsotti, and Chon Day.",
"title": "Mass media"
},
{
"paragraph_id": 8,
"text": "Bill Hoest, Jerry Marcus, and Virgil Partch began as magazine gag cartoonists and moved to syndicated comic strips. Richard Thompson illustrated numerous feature articles in The Washington Post before creating his Cul de Sac comic strip. The sports section of newspapers usually featured cartoons, sometimes including syndicated features such as Chester \"Chet\" Brown's All in Sport.",
"title": "Mass media"
},
{
"paragraph_id": 9,
"text": "Editorial cartoons are found almost exclusively in news publications and news websites. Although they also employ humor, they are more serious in tone, commonly using irony or satire. The art usually acts as a visual metaphor to illustrate a point of view on current social or political topics. Editorial cartoons often include speech balloons and sometimes use multiple panels. Editorial cartoonists of note include Herblock, David Low, Jeff MacNelly, Mike Peters, and Gerald Scarfe.",
"title": "Mass media"
},
{
"paragraph_id": 10,
"text": "Comic strips, also known as cartoon strips in the United Kingdom, are found daily in newspapers worldwide, and are usually a short series of cartoon illustrations in sequence. In the United States, they are not commonly called \"cartoons\" themselves, but rather \"comics\" or \"funnies\". Nonetheless, the creators of comic strips—as well as comic books and graphic novels—are usually referred to as \"cartoonists\". Although humor is the most prevalent subject matter, adventure and drama are also represented in this medium. Some noteworthy cartoonists of humorous comic strips are Scott Adams, Charles Schulz, E. C. Segar, Mort Walker and Bill Watterson.",
"title": "Mass media"
},
{
"paragraph_id": 11,
"text": "Political cartoons are like illustrated editorials that serve visual commentaries on political events. They offer subtle criticism which are cleverly quoted with humour and satire to the extent that the criticized does not get embittered.",
"title": "Mass media"
},
{
"paragraph_id": 12,
"text": "The pictorial satire of William Hogarth is regarded as a precursor to the development of political cartoons in 18th century England. George Townshend produced some of the first overtly political cartoons and caricatures in the 1750s. The medium began to develop in the latter part of the 18th century under the direction of its great exponents, James Gillray and Thomas Rowlandson, both from London. Gillray explored the use of the medium for lampooning and caricature, and has been referred to as the father of the political cartoon. By calling the king, prime ministers and generals to account for their behaviour, many of Gillray's satires were directed against George III, depicting him as a pretentious buffoon, while the bulk of his work was dedicated to ridiculing the ambitions of revolutionary France and Napoleon. George Cruikshank became the leading cartoonist in the period following Gillray, from 1815 until the 1840s. His career was renowned for his social caricatures of English life for popular publications.",
"title": "Mass media"
},
{
"paragraph_id": 13,
"text": "By the mid 19th century, major political newspapers in many other countries featured cartoons commenting on the politics of the day. Thomas Nast, in New York City, showed how realistic German drawing techniques could redefine American cartooning. His 160 cartoons relentlessly pursued the criminal characteristic of the Tweed machine in New York City, and helped bring it down. Indeed, Tweed was arrested in Spain when police identified him from Nast's cartoons. In Britain, Sir John Tenniel was the toast of London. In France under the July Monarchy, Honoré Daumier took up the new genre of political and social caricature, most famously lampooning the rotund King Louis Philippe.",
"title": "Mass media"
},
{
"paragraph_id": 14,
"text": "Political cartoons can be humorous or satirical, sometimes with piercing effect. The target of the humor may complain, but can seldom fight back. Lawsuits have been very rare; the first successful lawsuit against a cartoonist in over a century in Britain came in 1921, when J. H. Thomas, the leader of the National Union of Railwaymen (NUR), initiated libel proceedings against the magazine of the British Communist Party. Thomas claimed defamation in the form of cartoons and words depicting the events of \"Black Friday\", when he allegedly betrayed the locked-out Miners' Federation. To Thomas, the framing of his image by the far left threatened to grievously degrade his character in the popular imagination. Soviet-inspired communism was a new element in European politics, and cartoonists unrestrained by tradition tested the boundaries of libel law. Thomas won the lawsuit and restored his reputation.",
"title": "Mass media"
},
{
"paragraph_id": 15,
"text": "Cartoons such as xkcd have also found their place in the world of science, mathematics, and technology. For example, the cartoon Wonderlab looked at daily life in the chemistry lab. In the U.S., one well-known cartoonist for these fields is Sidney Harris. Many of Gary Larson's cartoons have a scientific flavor.",
"title": "Mass media"
},
{
"paragraph_id": 16,
"text": "Books with cartoons are usually magazine-format \"comic books\", or occasionally reprints of newspaper cartoons.",
"title": "Mass media"
},
{
"paragraph_id": 17,
"text": "In Britain in the 1930s adventure magazines became quite popular, especially those published by DC Thomson; the publisher sent observers around the country to talk to boys and learn what they wanted to read about. The story line in magazines, comic books and cinema that most appealed to boys was the glamorous heroism of British soldiers fighting wars that were exciting and just. DC Thomson issued the first The Dandy Comic in December 1937. It had a revolutionary design that broke away from the usual children's comics that were published broadsheet in size and not very colourful. Thomson capitalized on its success with a similar product The Beano in 1938.",
"title": "Mass media"
},
{
"paragraph_id": 18,
"text": "On some occasions, new gag cartoons have been created for book publication, as was the case with Think Small, a 1967 promotional book distributed as a giveaway by Volkswagen dealers. Bill Hoest and other cartoonists of that decade drew cartoons showing Volkswagens, and these were published along with humorous automotive essays by such humorists as H. Allen Smith, Roger Price and Jean Shepherd. The book's design juxtaposed each cartoon alongside a photograph of the cartoon's creator.",
"title": "Mass media"
},
{
"paragraph_id": 19,
"text": "Because of the stylistic similarities between comic strips and early animated films, cartoon came to refer to animation, and the word cartoon is currently used in reference to both animated cartoons and gag cartoons. While animation designates any style of illustrated images seen in rapid succession to give the impression of movement, the word \"cartoon\" is most often used as a descriptor for television programs and short films aimed at children, possibly featuring anthropomorphized animals, superheroes, the adventures of child protagonists or related themes.",
"title": "Animation"
},
{
"paragraph_id": 20,
"text": "In the 1980s, cartoon was shortened to toon, referring to characters in animated productions. This term was popularized in 1988 by the combined live-action/animated film Who Framed Roger Rabbit, followed in 1990 by the animated TV series Tiny Toon Adventures.",
"title": "Animation"
}
] | A cartoon is a type of visual art that is typically drawn, frequently animated, in an unrealistic or semi-realistic style. The specific meaning has evolved, but the modern usage usually refers to either: an image or series of images intended for satire, caricature, or humor; or a motion picture that relies on a sequence of illustrations for its animation. Someone who creates cartoons in the first sense is called a cartoonist, and in the second sense they are usually called an animator. The concept originated in the Middle Ages, and first described a preparatory drawing for a piece of art, such as a painting, fresco, tapestry, or stained glass window. In the 19th century, beginning in Punch magazine in 1843, cartoon came to refer – ironically at first – to humorous artworks in magazines and newspapers. Then it also was used for political cartoons and comic strips. When the medium developed, in the early 20th century, it began to refer to animated films that resembled print cartoons. | 2001-11-18T23:06:15Z | 2023-12-19T08:25:03Z | [
"Template:Pp",
"Template:Comics navbar",
"Template:Sfn",
"Template:Cite book",
"Template:Refbegin",
"Template:Cite EB1911",
"Template:Short description",
"Template:Lang-nl",
"Template:Portal",
"Template:Cite news",
"Template:Refend",
"Template:Commons category",
"Template:Wiktionary",
"Template:Other uses",
"Template:\" '",
"Template:Reflist",
"Template:Harvnb",
"Template:Cite web",
"Template:Cite magazine",
"Template:Authority control",
"Template:Sprotect",
"Template:Lang-it",
"Template:Main",
"Template:More"
] | https://en.wikipedia.org/wiki/Cartoon |
7,167 | Chief Minister of the Northern Territory | The chief minister of the Northern Territory is the head of government of the Northern Territory. The office is the equivalent of a state premier. When the Northern Territory Legislative Assembly was created in 1974, the head of government was officially known as majority leader. This title was used in the first parliament (1974–1977) and the first eighteen months of the second. When self-government was granted the Northern Territory in 1978, the title of the head of government became chief minister.
The chief minister is formally appointed by the administrator, who in normal circumstances will appoint the head of whichever party holds the majority of seats in the unicameral Legislative Assembly. In times of constitutional crisis, the administrator can appoint someone else as chief minister, though this has never occurred.
Since 21 December 2023, following the resignation of Natasha Fyles, the chief minister is Eva Lawler of the Labor Party. She is the third female chief minister of the Northern Territory.
The Country Liberal Party won the first Northern Territory election on 19 October 1974 and elected Goff Letts majority leader. He headed an Executive that carried out most of the functions of a ministry at the state level. At the 1977 election Letts lost his seat and party leadership. He was succeeded on 13 August 1977 by Paul Everingham (CLP) as Majority Leader. When the Territory attained self-government on 1 July 1978, Everingham became chief minister with greatly expanded powers.
In 2001, Clare Martin became the first Labor and female chief minister of the Northern Territory. Until 2004 the conduct of elections and drawing of electoral boundaries was performed by the Northern Territory Electoral Office, a unit of the Department of the chief minister. In March 2004 the independent Northern Territory Electoral Commission was established.
In 2013, Terry Mills was replaced as chief minister and CLP leader by Adam Giles in a CLP leadership ballot on 13 March; Giles thereby became the first Indigenous Australian to lead a state or territory government in Australia.
Following Labor's landslide victory at the 2016 election, Michael Gunner became chief minister; he was the first chief minister born in the Northern Territory. On 10 May 2022, Gunner announced his intention to resign. On 13 May 2022, Natasha Fyles was elected to the position by the Labor caucus. On 19 December 2023, Fyles resigned following controversy over undeclared shares in the mining company South32. On 21 December 2023, Eva Lawler replaced Fyles by a unanimous decision of the Labor caucus.
From the foundation of the Northern Territory Legislative Assembly in 1974 until the granting of self-government in 1978, the head of government was known as the majority leader:
From 1978, the position was known as the chief minister: | [
{
"paragraph_id": 0,
"text": "The chief minister of the Northern Territory is the head of government of the Northern Territory. The office is the equivalent of a state premier. When the Northern Territory Legislative Assembly was created in 1974, the head of government was officially known as majority leader. This title was used in the first parliament (1974–1977) and the first eighteen months of the second. When self-government was granted the Northern Territory in 1978, the title of the head of government became chief minister.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The chief minister is formally appointed by the administrator, who in normal circumstances will appoint the head of whichever party holds the majority of seats in the unicameral Legislative Assembly. In times of constitutional crisis, the administrator can appoint someone else as chief minister, though this has never occurred.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Since 21 December 2023, following the resignation of Natasha Fyles, the chief minister is Eva Lawler of the Labor Party. She is the third female chief minister of the Northern Territory.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The Country Liberal Party won the first Northern Territory election on 19 October 1974 and elected Goff Letts majority leader. He headed an Executive that carried out most of the functions of a ministry at the state level. At the 1977 election Letts lost his seat and party leadership. He was succeeded on 13 August 1977 by Paul Everingham (CLP) as Majority Leader. When the Territory attained self-government on 1 July 1978, Everingham became chief minister with greatly expanded powers.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 2001, Clare Martin became the first Labor and female chief minister of the Northern Territory. Until 2004 the conduct of elections and drawing of electoral boundaries was performed by the Northern Territory Electoral Office, a unit of the Department of the chief minister. In March 2004 the independent Northern Territory Electoral Commission was established.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 2013, Mills was replaced as chief minister and CLP leader by Adam Giles at the 2013 CLP leadership ballot on 13 March to become the first indigenous Australian to lead a state or territory government in Australia.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Following the 2016 election landslide outcome, Labor's Michael Gunner became chief minister; he was the first Chief Minister who was born in the Northern Territory. On 10 May 2022, Gunner announced his intention to resign. On 13 May 2022, Natasha Fyles was elected to the position by the Labor caucus. On 19 December 2023, Fyles resigned following controversy over undeclared shares in mining company South32. On21 December 2023, Eva Lawler replaced Fyles by a unanimous decision of the Labor caucus.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "From the foundation of the Northern Territory Legislative Assembly in 1974 until the granting of self-government in 1978, the head of government was known as the majority leader:",
"title": "List of chief ministers of the Northern Territory"
},
{
"paragraph_id": 8,
"text": "From 1978, the position was known as the chief minister:",
"title": "List of chief ministers of the Northern Territory"
}
] | The chief minister of the Northern Territory is the head of government of the Northern Territory. The office is the equivalent of a state premier. When the Northern Territory Legislative Assembly was created in 1974, the head of government was officially known as majority leader. This title was used in the first parliament (1974–1977) and the first eighteen months of the second. When self-government was granted the Northern Territory in 1978, the title of the head of government became chief minister. The chief minister is formally appointed by the administrator, who in normal circumstances will appoint the head of whichever party holds the majority of seats in the unicameral Legislative Assembly. In times of constitutional crisis, the administrator can appoint someone else as chief minister, though this has never occurred. Since 21 December 2023, following the resignation of Natasha Fyles, the chief minister is Eva Lawler of the Labor Party. She is the third female chief minister of the Northern Territory. | 2001-11-19T03:38:46Z | 2023-12-30T03:50:29Z | [
"Template:Short description",
"Template:Northern Territory ministerial portfolios",
"Template:Government of the Northern Territory",
"Template:Infobox official post",
"Template:Start date",
"Template:Enddate",
"Template:Cite news",
"Template:Australian premiers",
"Template:Politics of the Northern Territory",
"Template:Use dmy dates",
"Template:Use Australian English",
"Template:Age in years and days",
"Template:Reflist",
"Template:Abbr"
] | https://en.wikipedia.org/wiki/Chief_Minister_of_the_Northern_Territory |
7,172 | Chemotherapy | Chemotherapy (often abbreviated to chemo and sometimes CTX or CTx) is a type of cancer treatment that uses one or more anti-cancer drugs (chemotherapeutic agents or alkylating agents) as part of a standardized chemotherapy regimen. Chemotherapy may be given with a curative intent (which almost always involves combinations of drugs) or it may aim to prolong life or to reduce symptoms (palliative chemotherapy). Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology.
The term chemotherapy has come to connote non-specific usage of intracellular poisons to inhibit mitosis (cell division) or induce DNA damage, which is why inhibition of DNA repair can augment chemotherapy. The connotation of the word chemotherapy excludes more selective agents that block extracellular signals (signal transduction). The development of therapies with specific molecular or genetic targets, which inhibit growth-promoting signals from classic endocrine hormones (primarily estrogens for breast cancer and androgens for prostate cancer) are now called hormonal therapies. By contrast, other inhibitions of growth-signals like those associated with receptor tyrosine kinases are referred to as targeted therapy.
Importantly, the use of drugs (whether chemotherapy, hormonal therapy or targeted therapy) constitutes systemic therapy for cancer in that they are introduced into the blood stream and are therefore in principle able to address cancer at any anatomic location in the body. Systemic therapy is often used in conjunction with other modalities that constitute local therapy (i.e., treatments whose efficacy is confined to the anatomic area where they are applied) for cancer such as radiation therapy, surgery or hyperthermia therapy.
Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis) but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression (decreased production of blood cells, hence also immunosuppression), mucositis (inflammation of the lining of the digestive tract), and alopecia (hair loss). Because of the effect on immune cells (especially lymphocytes), chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self (so-called autoimmunity). These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others.
There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms.
All chemotherapy regimens require that the recipient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a person can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity.
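The fractional-kill point above lends itself to a small numerical illustration. The sketch below is not a clinical model; the starting cell count, kill fraction, regrowth factor, and number of cycles are hypothetical values chosen only to show why a single dose is rarely enough and why treatment is given in repeated cycles.

```python
# Illustrative sketch of the fractional ("log") kill concept: each treatment cycle
# kills a fixed fraction of tumour cells, and the surviving cells regrow before the
# next cycle. All numbers are hypothetical and chosen only for illustration.

def simulate_cycles(initial_cells: float, kill_fraction: float,
                    regrowth_factor: float, cycles: int) -> list:
    """Return the estimated cell count after each treatment cycle."""
    counts = []
    cells = initial_cells
    for _ in range(cycles):
        cells *= (1.0 - kill_fraction)  # fraction of cells killed by this dose
        cells *= regrowth_factor        # regrowth before the next dose
        counts.append(cells)
    return counts

if __name__ == "__main__":
    # 1e9 cells, 99% killed per cycle, 5-fold regrowth between cycles, 6 cycles
    for i, n in enumerate(simulate_cycles(1e9, 0.99, 5.0, 6), start=1):
        print(f"after cycle {i}: about {n:.2e} cells remaining")
```

Because each cycle removes a roughly constant fraction of cells rather than a constant number, the remaining population only falls steadily if the per-cycle kill outweighs regrowth between cycles.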
The effectiveness of chemotherapy depends on the type of cancer and the stage. The overall effectiveness ranges from being curative for some cancers, such as some leukemias, to being ineffective, such as in some brain tumors, to being needless in others, like most non-melanoma skin cancers.
Dosage of chemotherapy can be difficult: If the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects) will be intolerable to the person receiving it. The standard method of determining chemotherapy dosage is based on calculated body surface area (BSA). The BSA is usually calculated with a mathematical formula or a nomogram, using the recipient's weight and height, rather than by direct measurement of body area. This formula was originally derived in a 1916 study and attempted to translate medicinal doses established with laboratory animals to equivalent doses for humans. The study only included nine human subjects. When chemotherapy was introduced in the 1950s, the BSA formula was adopted as the official standard for chemotherapy dosing for lack of a better option.
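As a concrete illustration of BSA-based dosing, the sketch below uses the Du Bois formula, one of several BSA formulas in clinical use and generally attributed to the 1916 study mentioned above; the 100 mg per square metre dose in the example is a hypothetical figure, not a prescribing recommendation.

```python
# Minimal sketch of body surface area (BSA) dosing.
# BSA is estimated here with the Du Bois formula:
#   BSA (m^2) = 0.007184 * weight_kg**0.425 * height_cm**0.725
# The 100 mg/m^2 dose used in the example is a made-up figure for illustration only.

def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Estimate body surface area in square metres (Du Bois formula)."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

def bsa_dose_mg(dose_per_m2_mg: float, weight_kg: float, height_cm: float) -> float:
    """Total dose in mg for a drug dosed per square metre of BSA."""
    return dose_per_m2_mg * bsa_du_bois(weight_kg, height_cm)

if __name__ == "__main__":
    bsa = bsa_du_bois(weight_kg=70, height_cm=170)   # about 1.81 m^2
    print(f"BSA: {bsa:.2f} m^2")
    print(f"Dose at a hypothetical 100 mg/m^2: {bsa_dose_mg(100, 70, 170):.0f} mg")
```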
The validity of this method in calculating uniform doses has been questioned because the formula only takes into account the individual's weight and height. Drug absorption and clearance are influenced by multiple factors, including age, sex, metabolism, disease state, organ function, drug-to-drug interactions, genetics, and obesity, which have major impacts on the actual concentration of the drug in the person's bloodstream. As a result, there is high variability in the systemic chemotherapy drug concentration in people dosed by BSA, and this variability has been demonstrated to be more than ten-fold for many drugs. In other words, if two people receive the same dose of a given drug based on BSA, the concentration of that drug in the bloodstream of one person may be 10 times higher or lower compared to that of the other person. This variability is typical with many chemotherapy drugs dosed by BSA, and, as shown below, was demonstrated in a study of 14 common chemotherapy drugs.
The result of this pharmacokinetic variability among people is that many people do not receive the right dose to achieve optimal treatment effectiveness with minimized toxic side effects. Some people are overdosed while others are underdosed. For example, in a randomized clinical trial, investigators found 85% of metastatic colorectal cancer patients treated with 5-fluorouracil (5-FU) did not receive the optimal therapeutic dose when dosed by the BSA standard—68% were underdosed and 17% were overdosed.
There has been controversy over the use of BSA to calculate chemotherapy doses for people who are obese. Because of their higher BSA, clinicians often arbitrarily reduce the dose prescribed by the BSA formula for fear of overdosing. In many cases, this can result in sub-optimal treatment.
Several clinical studies have demonstrated that when chemotherapy dosing is individualized to achieve optimal systemic drug exposure, treatment outcomes are improved and toxic side effects are reduced. In the 5-FU clinical study cited above, people whose dose was adjusted to achieve a pre-determined target exposure realized an 84% improvement in treatment response rate and a six-month improvement in overall survival (OS) compared with those dosed by BSA.
In the same study, investigators compared the incidence of common 5-FU-associated grade 3/4 toxicities between the dose-adjusted people and people dosed per BSA. The incidence of debilitating grades of diarrhea was reduced from 18% in the BSA-dosed group to 4% in the dose-adjusted group and serious hematologic side effects were eliminated. Because of the reduced toxicity, dose-adjusted patients were able to be treated for longer periods of time. BSA-dosed people were treated for a total of 680 months while people in the dose-adjusted group were treated for a total of 791 months. Completing the course of treatment is an important factor in achieving better treatment outcomes.
Similar results were found in a study involving people with colorectal cancer who have been treated with the popular FOLFOX regimen. The incidence of serious diarrhea was reduced from 12% in the BSA-dosed group of patients to 1.7% in the dose-adjusted group, and the incidence of severe mucositis was reduced from 15% to 0.8%.
The FOLFOX study also demonstrated an improvement in treatment outcomes. Positive response increased from 46% in the BSA-dosed group to 70% in the dose-adjusted group. Median progression free survival (PFS) and overall survival (OS) both improved by six months in the dose adjusted group.
One approach that can help clinicians individualize chemotherapy dosing is to measure the drug levels in blood plasma over time and adjust dose according to a formula or algorithm to achieve optimal exposure. With an established target exposure for optimized treatment effectiveness with minimized toxicities, dosing can be personalized to achieve target exposure and optimal results for each person. Such an algorithm was used in the clinical trials cited above and resulted in significantly improved treatment outcomes.
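The trials cited above used their own validated, drug-specific dosing algorithms, which are not reproduced here. The sketch below only illustrates the general idea of proportional adjustment toward a target exposure (AUC): the dose for the next cycle is scaled by the ratio of target to measured exposure, with the step size limited. All numbers, including the target AUC and the 25% step cap, are hypothetical.

```python
# Schematic sketch of exposure-guided dose adjustment: the next dose is scaled by the
# ratio of the target drug exposure (AUC) to the exposure measured in blood plasma,
# with the per-cycle change clamped to a maximum step. Real regimens use validated,
# drug-specific algorithms; every number here is hypothetical.

def adjust_dose(current_dose_mg: float, measured_auc: float, target_auc: float,
                max_step: float = 0.25) -> float:
    """Return the next dose, limiting the fractional change to +/- max_step."""
    ratio = target_auc / measured_auc
    ratio = max(1.0 - max_step, min(1.0 + max_step, ratio))  # clamp the adjustment
    return current_dose_mg * ratio

if __name__ == "__main__":
    # Hypothetical under-exposure: target AUC 25 (arbitrary units), measured AUC 18
    print(f"next dose: {adjust_dose(2400, measured_auc=18, target_auc=25):.0f} mg")
```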
Oncologists are already individualizing dosing of some cancer drugs based on exposure. Carboplatin and busulfan dosing rely upon results from blood tests to calculate the optimal dose for each person. Simple blood tests are also available for dose optimization of methotrexate, 5-FU, paclitaxel, and docetaxel.
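For carboplatin, exposure-based dosing is commonly carried out with the Calvert formula, dose in mg = target AUC x (GFR + 25), where GFR is the glomerular filtration rate in mL/min and the AUC target is chosen clinically. The sketch below simply evaluates that formula; the example AUC target and GFR are hypothetical inputs.

```python
# Sketch of the Calvert formula used for carboplatin dosing:
#   dose (mg) = target AUC (mg/mL x min) * (GFR (mL/min) + 25)
# The AUC target and GFR values below are hypothetical example inputs.

def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
    """Carboplatin dose in mg according to the Calvert formula."""
    return target_auc * (gfr_ml_min + 25.0)

if __name__ == "__main__":
    # Target AUC of 5 and GFR of 90 mL/min -> 5 * (90 + 25) = 575 mg
    print(f"carboplatin dose: {carboplatin_dose_mg(target_auc=5, gfr_ml_min=90):.0f} mg")
```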
The serum albumin level immediately prior to chemotherapy administration is an independent prognostic predictor of survival in various cancer types.
Alkylating agents are the oldest group of chemotherapeutics in use today. Originally derived from mustard gas used in World War I, there are now many types of alkylating agents in use. They are so named because of their ability to alkylate many molecules, including proteins, RNA and DNA. This ability to bind covalently to DNA via their alkyl group is the primary cause for their anti-cancer effects. DNA is made of two strands and the molecules may either bind twice to one strand of DNA (intrastrand crosslink) or may bind once to both strands (interstrand crosslink). If the cell tries to replicate crosslinked DNA during cell division, or tries to repair it, the DNA strands can break. This leads to a form of programmed cell death called apoptosis. Alkylating agents will work at any point in the cell cycle and thus are known as cell cycle-independent drugs. For this reason, the effect on the cell is dose dependent; the fraction of cells that die is directly proportional to the dose of drug.
The subtypes of alkylating agents are the nitrogen mustards, nitrosoureas, tetrazines, aziridines, cisplatins and derivatives, and non-classical alkylating agents. Nitrogen mustards include mechlorethamine, cyclophosphamide, melphalan, chlorambucil, ifosfamide and busulfan. Nitrosoureas include N-Nitroso-N-methylurea (MNU), carmustine (BCNU), lomustine (CCNU) and semustine (MeCCNU), fotemustine and streptozotocin. Tetrazines include dacarbazine, mitozolomide and temozolomide. Aziridines include thiotepa, mitomycin and diaziquone (AZQ). Cisplatin and derivatives include cisplatin, carboplatin and oxaliplatin. They impair cell function by forming covalent bonds with the amino, carboxyl, sulfhydryl, and phosphate groups in biologically important molecules. Non-classical alkylating agents include procarbazine and hexamethylmelamine.
Anti-metabolites are a group of molecules that impede DNA and RNA synthesis. Many of them have a similar structure to the building blocks of DNA and RNA. The building blocks are nucleotides; a molecule comprising a nucleobase, a sugar and a phosphate group. The nucleobases are divided into purines (guanine and adenine) and pyrimidines (cytosine, thymine and uracil). Anti-metabolites resemble either nucleobases or nucleosides (a nucleotide without the phosphate group), but have altered chemical groups. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. By inhibiting the enzymes involved in DNA synthesis, they prevent mitosis because the DNA cannot duplicate itself. Also, after misincorporation of the molecules into DNA, DNA damage can occur and programmed cell death (apoptosis) is induced. Unlike alkylating agents, anti-metabolites are cell cycle dependent. This means that they only work during a specific part of the cell cycle, in this case S-phase (the DNA synthesis phase). For this reason, at a certain dose, the effect plateaus and proportionally no more cell death occurs with increased doses. Subtypes of the anti-metabolites are the anti-folates, fluoropyrimidines, deoxynucleoside analogues and thiopurines.
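The contrast described here and in the alkylating-agent paragraph above, proportional cell kill for cell cycle-independent drugs versus a plateau for cell cycle-dependent drugs, can be sketched schematically. The functional forms and constants below are illustrative shapes only, not fitted pharmacological models.

```python
# Schematic contrast between the two dose-response behaviours described in the text:
#  - cell cycle-independent agents (e.g. alkylating agents): fraction of cells killed
#    rises roughly in proportion to dose,
#  - cell cycle-dependent agents (e.g. anti-metabolites): effect plateaus because only
#    the subpopulation in S-phase during exposure is vulnerable.
# The functional forms and constants are illustrative, not measured data.

def kill_cycle_independent(dose: float, slope: float = 0.08) -> float:
    """Fraction of cells killed, rising roughly linearly with dose (capped at 1.0)."""
    return min(1.0, slope * dose)

def kill_cycle_dependent(dose: float, max_kill: float = 0.45, d50: float = 2.0) -> float:
    """Fraction of cells killed, saturating at max_kill (the vulnerable S-phase pool)."""
    return max_kill * dose / (d50 + dose)

if __name__ == "__main__":
    for dose in (1, 2, 4, 8, 12):
        print(f"dose {dose:>2}: cycle-independent {kill_cycle_independent(dose):.2f}, "
              f"cycle-dependent {kill_cycle_dependent(dose):.2f}")
```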
The anti-folates include methotrexate and pemetrexed. Methotrexate inhibits dihydrofolate reductase (DHFR), an enzyme that regenerates tetrahydrofolate from dihydrofolate. When the enzyme is inhibited by methotrexate, the cellular levels of folate coenzymes diminish. These are required for thymidylate and purine production, which are both essential for DNA synthesis and cell division. Pemetrexed is another anti-metabolite that affects purine and pyrimidine production, and therefore also inhibits DNA synthesis. It primarily inhibits the enzyme thymidylate synthase, but also has effects on DHFR, aminoimidazole carboxamide ribonucleotide formyltransferase and glycinamide ribonucleotide formyltransferase. The fluoropyrimidines include fluorouracil and capecitabine. Fluorouracil is a nucleobase analogue that is metabolised in cells to form at least two active products: 5-fluorouridine monophosphate (FUMP) and 5-fluoro-2'-deoxyuridine 5'-phosphate (FdUMP). FUMP becomes incorporated into RNA and FdUMP inhibits the enzyme thymidylate synthase, both of which lead to cell death. Capecitabine is a prodrug of 5-fluorouracil that is broken down in cells to produce the active drug. The deoxynucleoside analogues include cytarabine, gemcitabine, decitabine, azacitidine, fludarabine, nelarabine, cladribine, clofarabine, and pentostatin. The thiopurines include thioguanine and mercaptopurine.
Anti-microtubule agents are plant-derived chemicals that block cell division by preventing microtubule function. Microtubules are an important cellular structure composed of two proteins, α-tubulin and β-tubulin. They are hollow, rod-shaped structures that are required for cell division, among other cellular functions. Microtubules are dynamic structures, which means that they are permanently in a state of assembly and disassembly. Vinca alkaloids and taxanes are the two main groups of anti-microtubule agents, and although both of these groups of drugs cause microtubule dysfunction, their mechanisms of action are completely opposite: Vinca alkaloids prevent the assembly of microtubules, whereas taxanes prevent their disassembly. By doing so, they can induce mitotic catastrophe in the cancer cells. Following this, cell cycle arrest occurs, which induces programmed cell death (apoptosis). These drugs can also affect blood vessel growth, an essential process that tumours utilise in order to grow and metastasise.
Vinca alkaloids are derived from the Madagascar periwinkle, Catharanthus roseus, formerly known as Vinca rosea. They bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules. The original vinca alkaloids are natural products that include vincristine and vinblastine. Following the success of these drugs, semi-synthetic vinca alkaloids were produced: vinorelbine (used in the treatment of non-small-cell lung cancer), vindesine, and vinflunine. These drugs are cell cycle-specific. They bind to the tubulin molecules in S-phase and prevent proper microtubule formation required for M-phase.
Taxanes are natural and semi-synthetic drugs. The first drug of their class, paclitaxel, was originally extracted from Taxus brevifolia, the Pacific yew. Now this drug and another in this class, docetaxel, are produced semi-synthetically from a chemical found in the bark of another yew tree, Taxus baccata.
Podophyllotoxin is an antineoplastic lignan obtained primarily from the American mayapple (Podophyllum peltatum) and Himalayan mayapple (Sinopodophyllum hexandrum). It has anti-microtubule activity, and its mechanism is similar to that of the vinca alkaloids in that it binds to tubulin, inhibiting microtubule formation. Podophyllotoxin is used to produce two other drugs with different mechanisms of action: etoposide and teniposide.
Topoisomerase inhibitors are drugs that affect the activity of two enzymes: topoisomerase I and topoisomerase II. When the DNA double-strand helix is unwound, during DNA replication or transcription, for example, the adjacent unopened DNA winds tighter (supercoils), like opening the middle of a twisted rope. The stress caused by this effect is in part aided by the topoisomerase enzymes. They produce single- or double-strand breaks into DNA, reducing the tension in the DNA strand. This allows the normal unwinding of DNA to occur during replication or transcription. Inhibition of topoisomerase I or II interferes with both of these processes.
Two topoisomerase I inhibitors, irinotecan and topotecan, are semi-synthetically derived from camptothecin, which is obtained from the Chinese ornamental tree Camptotheca acuminata. Drugs that target topoisomerase II can be divided into two groups. The topoisomerase II poisons cause increased levels of enzymes bound to DNA. This prevents DNA replication and transcription, causes DNA strand breaks, and leads to programmed cell death (apoptosis). These agents include etoposide, doxorubicin, mitoxantrone and teniposide. The second group, catalytic inhibitors, are drugs that block the activity of topoisomerase II, and therefore prevent DNA synthesis and translation because the DNA cannot unwind properly. This group includes novobiocin, merbarone, and aclarubicin, which also have other significant mechanisms of action.
The cytotoxic antibiotics are a varied group of drugs that have various mechanisms of action. The common theme that they share in their chemotherapy indication is that they interrupt cell division. The most important subgroups are the anthracyclines and the bleomycins; other prominent examples include mitomycin C and actinomycin.
Among the anthracyclines, doxorubicin and daunorubicin were the first, and were obtained from the bacterium Streptomyces peucetius. Derivatives of these compounds include epirubicin and idarubicin. Other clinically used drugs in the anthracycline group are pirarubicin, aclarubicin, and mitoxantrone. The mechanisms of anthracyclines include DNA intercalation (molecules insert between the two strands of DNA), generation of highly reactive free radicals that damage intercellular molecules and topoisomerase inhibition.
Actinomycin is a complex molecule that intercalates DNA and prevents RNA synthesis.
Bleomycin, a glycopeptide isolated from Streptomyces verticillus, also intercalates DNA, but produces free radicals that damage DNA. This occurs when bleomycin binds to a metal ion, becomes chemically reduced and reacts with oxygen.
Mitomycin is a cytotoxic antibiotic with the ability to alkylate DNA.
Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). According to a recent (2016) systematic review, oral therapies present additional challenges for patients and care teams to maintain and support adherence to treatment plans.
There are many intravenous methods of drug delivery, known as vascular access devices. These include the winged infusion device, peripheral venous catheter, midline catheter, peripherally inserted central catheter (PICC), central venous catheter and implantable port. The devices have different applications regarding duration of chemotherapy treatment, method of delivery and types of chemotherapeutic agent.
Depending on the person, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly used systems are the Hickman line, the Port-a-Cath, and the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and eliminate the need for repeated insertion of peripheral cannulae.
Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumors. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases.
Topical chemotherapies, such as 5-fluorouracil, are used to treat some cases of non-melanoma skin cancer.
If the cancer has central nervous system involvement, or with meningeal disease, intrathecal chemotherapy may be administered.
Chemotherapeutic techniques have a range of side effects that depend on the type of medications used. The most common medications affect mainly the fast-dividing cells of the body, such as blood cells and the cells lining the mouth, stomach, and intestines. Chemotherapy-related toxicities can occur acutely after administration, within hours or days, or chronically, from weeks to years.
Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets. Anemia and thrombocytopenia may require blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 x 10⁹/litre) can be improved with synthetic G-CSF (granulocyte colony-stimulating factor, e.g., filgrastim, lenograstim, efbemalenograstim alfa).
In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogenic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the person before the treatment, multiplied and then re-injected afterward; in allogenic BMTs, the source is a donor.) However, some people still develop diseases because of this interference with bone marrow.
Although people receiving chemotherapy are encouraged to wash their hands, avoid sick people, and take other infection-reducing steps, about 85% of infections are due to naturally occurring microorganisms in the person's own gastrointestinal tract (including the oral cavity) and skin. This may manifest as systemic infections, such as sepsis, or as localized outbreaks, such as Herpes simplex, shingles, or other members of the Herpesviridae. The risk of illness and death can be reduced by taking common antibiotics such as quinolones or trimethoprim/sulfamethoxazole before any fever or sign of infection appears. Quinolones show effective prophylaxis mainly with hematological cancer. However, in general, for every five people who are immunosuppressed following chemotherapy who take an antibiotic, one fever can be prevented; for every 34 who take an antibiotic, one death can be prevented. Sometimes, chemotherapy treatments are postponed because the immune system is suppressed to a critically low level.
In Japan, the government has approved the use of some medicinal mushrooms like Trametes versicolor, to counteract depression of the immune system in people undergoing chemotherapy.
Trilaciclib is an inhibitor of cyclin-dependent kinase 4/6 approved for the prevention of myelosuppression caused by chemotherapy. The drug is given before chemotherapy to protect bone marrow function.
Due to immune system suppression, neutropenic enterocolitis (typhlitis) is a "life-threatening gastrointestinal complication of chemotherapy." Typhlitis is an intestinal infection which may manifest itself through symptoms including nausea, vomiting, diarrhea, a distended abdomen, fever, chills, or abdominal pain and tenderness.
Typhlitis is a medical emergency. It has a very poor prognosis and is often fatal unless promptly recognized and aggressively treated. Successful treatment hinges on early diagnosis provided by a high index of suspicion and the use of CT scanning, nonoperative treatment for uncomplicated cases, and sometimes elective right hemicolectomy to prevent recurrence.
Nausea, vomiting, anorexia, diarrhea, abdominal cramps, and constipation are common side-effects of chemotherapeutic medications that kill fast-dividing cells. Malnutrition and dehydration can result when the recipient does not eat or drink enough, or when the person vomits frequently, because of gastrointestinal damage. This can result in rapid weight loss, or occasionally in weight gain, if the person eats too much in an effort to allay nausea or heartburn. Weight gain can also be caused by some steroid medications. These side-effects can frequently be reduced or eliminated with antiemetic drugs. Low-certainty evidence also suggests that probiotics may have a preventative and treatment effect of diarrhoea related to chemotherapy alone and with radiotherapy. However, a high index of suspicion is appropriate, since diarrhoea and bloating are also symptoms of typhlitis, a very serious and potentially life-threatening medical emergency that requires immediate treatment.
Anemia can be a combined outcome caused by myelosuppressive chemotherapy, and possible cancer-related causes such as bleeding, blood cell destruction (hemolysis), hereditary disease, kidney dysfunction, nutritional deficiencies or anemia of chronic disease. Treatments to mitigate anemia include hormones to boost blood production (erythropoietin), iron supplements, and blood transfusions. Myelosuppressive therapy can cause a tendency to bleed easily, leading to anemia. Medications that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. Extremely low platelet counts may be temporarily boosted through platelet transfusions and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover.
Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours.
Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused, potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens, and helping to successfully manage these symptoms in many people. Effective management of these unpleasant and sometimes debilitating symptoms results in increased quality of life for the recipient and more efficient treatment cycles, because better tolerance means fewer interruptions of treatment and better overall health.
Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in "chemo curls." Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens.
Chemotherapy induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or less often alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than men.
Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised.
Development of secondary neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely as the general population to develop a secondary neoplasm during the 30 years after treatment. Not all of this increase can be attributed to chemotherapy.
Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil.
Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles.
People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years.
Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown a similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs.
In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years.
Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression.
In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened.
Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment.
Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media.
In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated.
Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine.
Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis.
Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also by the direct effects of drug clearance by the kidneys. Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury.
Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss.
Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions.
Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease).
Hand-foot syndrome (redness, swelling, and pain on the palms and soles) is another side effect of cytotoxic chemotherapy.
Nutritional problems are also frequently seen in cancer patients at diagnosis and throughout chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this, leading to weight gain and increased calorie and protein intake compared with enteral nutrition.
Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer.
The blood–brain barrier poses an obstacle to the delivery of chemotherapy to the brain, because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump drugs out of the brain and the brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for the treatment of brain tumors. Only small lipophilic alkylating agents, such as lomustine or temozolomide, are able to cross the blood–brain barrier.
Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this, they signal for new blood vessels to grow. The newly formed tumor vasculature is poorly organized and does not deliver an adequate blood supply to all areas of the tumor. This creates problems for drug delivery, because many drugs reach the tumor via the circulatory system.
Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug cannot prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur that prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cell stress can induce changes in gene expression that enable resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways.
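To make the efflux-pump mechanism concrete, here is a minimal kinetic sketch, assuming a simple one-compartment model with made-up rate constants (none of these values come from the text or from any real drug or cell line); it only illustrates how amplified p-glycoprotein activity lowers the steady-state intracellular drug concentration.

```python
# Toy model (illustrative only): intracellular drug concentration C follows
#   dC/dt = k_in * C_ext - (k_passive_out + k_pump) * C
# so the steady state is C_ss = k_in * C_ext / (k_passive_out + k_pump).
# All rate constants are hypothetical and chosen only to show the trend.

def steady_state_intracellular(c_ext, k_in, k_passive_out, k_pump):
    """Steady-state intracellular concentration for the one-compartment model."""
    return k_in * c_ext / (k_passive_out + k_pump)

C_EXT = 1.0          # extracellular drug concentration (arbitrary units)
K_IN = 0.5           # passive influx rate constant (hypothetical)
K_PASSIVE_OUT = 0.1  # passive efflux rate constant (hypothetical)

scenarios = [
    ("drug-sensitive cell (few efflux pumps)", 0.05),
    ("resistant cell (amplified p-glycoprotein)", 2.0),
]
for label, k_pump in scenarios:
    c_ss = steady_state_intracellular(C_EXT, K_IN, K_PASSIVE_OUT, k_pump)
    print(f"{label}: steady-state intracellular concentration ~ {c_ss:.3f}")
```

In this toy model the resistant cell retains only about 7% of the drug that the sensitive cell does, which is the qualitative effect that the p-glycoprotein inhibitors under investigation aim to reverse.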
Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecules and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs: they will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to be delivered to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than those seen with cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, targeted therapeutics were thought to be selective for a single protein; it is now clear that a drug often binds a range of protein targets. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small-molecule drug.
Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth.
In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis.
As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor.
Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy.
Some chemotherapy drugs are used in diseases other than cancer, such as autoimmune disorders and noncancerous plasma cell dyscrasias. In some cases they are used at lower doses, which minimizes the side effects, while in other cases doses similar to those used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomib in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma, such as lenalidomide, have shown promise in treating AL amyloidosis.
Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in a non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer.
In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) subsequently introduced the concept of hazardous drugs after publishing a recommendation on their handling in 1983. Federal regulations followed when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006.
The National Institute for Occupational Safety and Health (NIOSH) has since assessed workplace exposure to these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported in the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist did not have any other risk factor for cancer, and therefore her cancer was attributed to the exposure to antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. In another case, a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs; investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure.
Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or in cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing of antineoplastic drugs in health care facilities are also at risk of exposure.
Dermal exposure is thought to be the main route of exposure, because significant amounts of antineoplastic agents have been found on the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route. Ingestion by hand-to-mouth contact is less likely than the other routes because of the hygiene standards enforced in health institutions, but it remains a potential route, especially in workplaces outside of health institutions. Workers can also be exposed through needle-stick injuries. Research in this area has established that occupational exposure occurs, based on evidence from multiple urine samples of health care workers.
Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs can have adverse effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are repeatedly exposed to antineoplastic drugs have experienced adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances.
Moreover, these drugs can have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs, and research has linked alkylating agents with the development of leukemias in humans. Studies have reported elevated risks of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations revealed a potential genotoxic effect of antineoplastic drugs on workers in health care settings.
As of 2018, no occupational exposure limits had been set for antineoplastic drugs; that is, neither OSHA nor the American Conference of Governmental Industrial Hygienists (ACGIH) had set workplace safety guidelines.
NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of the technique of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container.
The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag.
The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and bag it for disposal inside the ventilated cabinet.
Drugs should only be administered using protective medical devices, such as needleless and closed systems, and techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs.
Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site.
All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information should be provided before health care workers come into contact with the drugs, that is, at the time of their initial assignment to a work area where hazardous drugs are present. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced.
When cleaning and decontaminating a work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact with, or has a trace of, the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container.
A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container.
A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage.
Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs.
Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and the urinary metabolite of 5-fluorouracil. In addition, other assays can measure these drugs directly in the urine, although they are rarely used. Detection of these drugs directly in the urine is a sign of high exposure levels and indicates that uptake of the drugs is occurring either through inhalation or dermally.
There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide the medicines used for cancer into several different types.
The first use of small-molecule drugs to treat cancer was in the early 20th century, although the specific chemicals first used were not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was discovered to be a potent suppressor of hematopoiesis (blood production). A similar family of compounds known as nitrogen mustards were studied further during World War II at the Yale School of Medicine. It was reasoned that an agent that damaged the rapidly growing white blood cells might have a similar effect on cancer. Therefore, in December 1942, several people with advanced lymphomas (cancers of the lymphatic system and lymph nodes) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable. Concurrently, during a military operation in World War II, following a German air raid on the Italian harbour of Bari, several hundred people were accidentally exposed to mustard gas, which had been transported there by the Allied forces to prepare for possible retaliation in the event of German use of chemical warfare. The survivors were later found to have very low white blood cell counts. After WWII was over and the reports declassified, the experiences converged and led researchers to look for other substances that might have similar effects against cancer. The first chemotherapy drug to be developed from this line of research was mustine. Since then, many other drugs have been developed to treat cancer, and drug development has exploded into a multibillion-dollar industry, although the principles and limitations of chemotherapy discovered by the early researchers still apply.
The word chemotherapy without a modifier usually refers to cancer treatment, but its historical meaning was broader. The term was coined in the early 1900s by Paul Ehrlich as meaning any use of chemicals to treat any disease (chemo- + -therapy), such as the use of antibiotics (antibacterial chemotherapy). Ehrlich was not optimistic that effective chemotherapy drugs would be found for the treatment of cancer. The first modern chemotherapeutic agent was arsphenamine, an arsenic compound discovered in 1907 and used to treat syphilis. This was later followed by sulfonamides (sulfa drugs) and penicillin. In today's usage, the sense "any treatment of disease with drugs" is often expressed with the word pharmacotherapy. Metaphorically, 'chemotherapy' has been likened to a 'storm': both can cause distress but may be followed by a healing or cleansing effect.
The top 10 best-selling (in terms of revenue) cancer drugs of 2013:
Specially targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill or reduced toxicity or both.
Antibody-drug conjugates (ADCs) comprise an antibody, drug and a linker between them. The antibody will be targeted at a preferentially expressed protein in the tumour cells (known as a tumor antigen) or on cells that the tumor can utilise, such as blood vessel endothelial cells. They bind to the tumor antigen and are internalised, where the linker releases the drug into the cell. These specially targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in people who are sicker and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches.
The first approved drug of this type was gemtuzumab ozogamicin (Mylotarg), released by Wyeth (now Pfizer). The drug was approved to treat acute myeloid leukemia. Two other drugs, trastuzumab emtansine and brentuximab vedotin, are both in late clinical trials, and the latter has been granted accelerated approval for the treatment of refractory Hodgkin's lymphoma and systemic anaplastic large cell lymphoma.
Nanoparticles are particles 1–1000 nanometers (nm) in size that can promote tumor selectivity and aid in delivering low-solubility drugs. Nanoparticles can be targeted passively or actively. Passive targeting exploits the difference between tumor blood vessels and normal blood vessels. Blood vessels in tumors are "leaky" because they have gaps from 200 to 2000 nm, which allow nanoparticles to escape into the tumor. Active targeting uses biological molecules (antibodies, proteins, DNA and receptor ligands) to preferentially target the nanoparticles to the tumor cells. There are many types of nanoparticle delivery systems, such as silica, polymers, liposomes and magnetic particles. Nanoparticles made of magnetic material can also be used to concentrate agents at tumor sites using an externally applied magnetic field. They have emerged as a useful vehicle in magnetic drug delivery for poorly soluble agents such as paclitaxel.
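As a toy illustration of the size-based passive targeting described above, the sketch below assumes (a figure not given in the text) that normal capillary junctions are only on the order of 10 nm wide, so particles larger than that stay inside normal vessels but can still escape through the 200–2000 nm gaps of tumor vasculature; the thresholds are illustrative only, not formulation guidance.

```python
# Toy sketch of the enhanced-permeability size logic behind passive targeting.
# Assumed (not from the text): normal capillary junctions ~10 nm wide.
# The 200-2000 nm tumor gap range is taken from the paragraph above.

NORMAL_VESSEL_GAP_NM = 10     # assumed upper bound for healthy endothelium
TUMOR_VESSEL_GAP_NM = 2000    # upper bound of the leaky tumor gaps cited above

def passively_accumulates_in_tumor(diameter_nm: float) -> bool:
    """True if a particle is too large to leak out of normal vessels but
    small enough to escape through leaky tumor vasculature."""
    return NORMAL_VESSEL_GAP_NM < diameter_nm <= TUMOR_VESSEL_GAP_NM

for d_nm in (5, 50, 500, 3000):
    verdict = ("passive tumor accumulation" if passively_accumulates_in_tumor(d_nm)
               else "no passive selectivity")
    print(f"{d_nm:>5} nm particle: {verdict}")
```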
Electrochemotherapy is a combined treatment in which injection of a chemotherapeutic drug is followed by the local application of high-voltage electric pulses to the tumor. The treatment enables chemotherapeutic drugs that otherwise cannot, or can only poorly, cross the cell membrane (such as bleomycin and cisplatin) to enter the cancer cells, achieving greater antitumor effectiveness.
Clinical electrochemotherapy has been successfully used for the treatment of cutaneous and subcutaneous tumors irrespective of their histological origin. The method has been reported as safe, simple and highly effective in all reports on its clinical use. Within the ESOPE project (European Standard Operating Procedures of Electrochemotherapy), Standard Operating Procedures (SOP) for electrochemotherapy were prepared based on the experience of leading European cancer centres. Recently, new electrochemotherapy modalities have been developed for the treatment of internal tumors, using surgical procedures, endoscopic routes or percutaneous approaches to gain access to the treatment area.
Hyperthermia therapy is heat treatment for cancer that can be a powerful tool when used in combination with chemotherapy (thermochemotherapy) or radiation for the control of a variety of cancers. The heat can be applied locally to the tumor site, which will dilate blood vessels to the tumor, allowing more chemotherapeutic medication to enter the tumor. Additionally, the tumor cell membrane will become more porous, further allowing more of the chemotherapeutic medicine to enter the tumor cell.
Hyperthermia has also been shown to help prevent or reverse "chemo-resistance." Chemotherapy resistance sometimes develops over time as the tumors adapt and can overcome the toxicity of the chemo medication. "Overcoming chemoresistance has been extensively studied within the past, especially using CDDP-resistant cells. In regard to the potential benefit that drug-resistant cells can be recruited for effective therapy by combining chemotherapy with hyperthermia, it was important to show that chemoresistance against several anticancer drugs (e.g. mitomycin C, anthracyclines, BCNU, melphalan) including CDDP could be reversed at least partially by the addition of heat."
Chemotherapy is used in veterinary medicine similar to how it is used in human medicine. | [
{
"paragraph_id": 0,
"text": "Chemotherapy (often abbreviated to chemo and sometimes CTX or CTx) is a type of cancer treatment that uses one or more anti-cancer drugs (chemotherapeutic agents or alkylating agents) as part of a standardized chemotherapy regimen. Chemotherapy may be given with a curative intent (which almost always involves combinations of drugs) or it may aim to prolong life or to reduce symptoms (palliative chemotherapy). Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term chemotherapy has come to connote non-specific usage of intracellular poisons to inhibit mitosis (cell division) or induce DNA damage, which is why inhibition of DNA repair can augment chemotherapy. The connotation of the word chemotherapy excludes more selective agents that block extracellular signals (signal transduction). The development of therapies with specific molecular or genetic targets, which inhibit growth-promoting signals from classic endocrine hormones (primarily estrogens for breast cancer and androgens for prostate cancer) are now called hormonal therapies. By contrast, other inhibitions of growth-signals like those associated with receptor tyrosine kinases are referred to as targeted therapy.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Importantly, the use of drugs (whether chemotherapy, hormonal therapy or targeted therapy) constitutes systemic therapy for cancer in that they are introduced into the blood stream and are therefore in principle able to address cancer at any anatomic location in the body. Systemic therapy is often used in conjunction with other modalities that constitute local therapy (i.e., treatments whose efficacy is confined to the anatomic area where they are applied) for cancer such as radiation therapy, surgery or hyperthermia therapy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis) but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression (decreased production of blood cells, hence also immunosuppression), mucositis (inflammation of the lining of the digestive tract), and alopecia (hair loss). Because of the effect on immune cells (especially lymphocytes), chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self (so-called autoimmunity). These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others.",
"title": ""
},
{
"paragraph_id": 4,
"text": "There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms.",
"title": "Treatment strategies"
},
{
"paragraph_id": 5,
"text": "All chemotherapy regimens require that the recipient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a person can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity.",
"title": "Treatment strategies"
},
{
"paragraph_id": 6,
"text": "The effectiveness of chemotherapy depends on the type of cancer and the stage. The overall effectiveness ranges from being curative for some cancers, such as some leukemias, to being ineffective, such as in some brain tumors, to being needless in others, like most non-melanoma skin cancers.",
"title": "Treatment strategies"
},
{
"paragraph_id": 7,
"text": "Dosage of chemotherapy can be difficult: If the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects) will be intolerable to the person receiving it. The standard method of determining chemotherapy dosage is based on calculated body surface area (BSA). The BSA is usually calculated with a mathematical formula or a nomogram, using the recipient's weight and height, rather than by direct measurement of body area. This formula was originally derived in a 1916 study and attempted to translate medicinal doses established with laboratory animals to equivalent doses for humans. The study only included nine human subjects. When chemotherapy was introduced in the 1950s, the BSA formula was adopted as the official standard for chemotherapy dosing for lack of a better option.",
"title": "Treatment strategies"
},
{
"paragraph_id": 8,
"text": "The validity of this method in calculating uniform doses has been questioned because the formula only takes into account the individual's weight and height. Drug absorption and clearance are influenced by multiple factors, including age, sex, metabolism, disease state, organ function, drug-to-drug interactions, genetics, and obesity, which have major impacts on the actual concentration of the drug in the person's bloodstream. As a result, there is high variability in the systemic chemotherapy drug concentration in people dosed by BSA, and this variability has been demonstrated to be more than ten-fold for many drugs. In other words, if two people receive the same dose of a given drug based on BSA, the concentration of that drug in the bloodstream of one person may be 10 times higher or lower compared to that of the other person. This variability is typical with many chemotherapy drugs dosed by BSA, and, as shown below, was demonstrated in a study of 14 common chemotherapy drugs.",
"title": "Treatment strategies"
},
{
"paragraph_id": 9,
"text": "The result of this pharmacokinetic variability among people is that many people do not receive the right dose to achieve optimal treatment effectiveness with minimized toxic side effects. Some people are overdosed while others are underdosed. For example, in a randomized clinical trial, investigators found 85% of metastatic colorectal cancer patients treated with 5-fluorouracil (5-FU) did not receive the optimal therapeutic dose when dosed by the BSA standard—68% were underdosed and 17% were overdosed.",
"title": "Treatment strategies"
},
{
"paragraph_id": 10,
"text": "There has been controversy over the use of BSA to calculate chemotherapy doses for people who are obese. Because of their higher BSA, clinicians often arbitrarily reduce the dose prescribed by the BSA formula for fear of overdosing. In many cases, this can result in sub-optimal treatment.",
"title": "Treatment strategies"
},
{
"paragraph_id": 11,
"text": "Several clinical studies have demonstrated that when chemotherapy dosing is individualized to achieve optimal systemic drug exposure, treatment outcomes are improved and toxic side effects are reduced. In the 5-FU clinical study cited above, people whose dose was adjusted to achieve a pre-determined target exposure realized an 84% improvement in treatment response rate and a six-month improvement in overall survival (OS) compared with those dosed by BSA.",
"title": "Treatment strategies"
},
{
"paragraph_id": 12,
"text": "In the same study, investigators compared the incidence of common 5-FU-associated grade 3/4 toxicities between the dose-adjusted people and people dosed per BSA. The incidence of debilitating grades of diarrhea was reduced from 18% in the BSA-dosed group to 4% in the dose-adjusted group and serious hematologic side effects were eliminated. Because of the reduced toxicity, dose-adjusted patients were able to be treated for longer periods of time. BSA-dosed people were treated for a total of 680 months while people in the dose-adjusted group were treated for a total of 791 months. Completing the course of treatment is an important factor in achieving better treatment outcomes.",
"title": "Treatment strategies"
},
{
"paragraph_id": 13,
"text": "Similar results were found in a study involving people with colorectal cancer who have been treated with the popular FOLFOX regimen. The incidence of serious diarrhea was reduced from 12% in the BSA-dosed group of patients to 1.7% in the dose-adjusted group, and the incidence of severe mucositis was reduced from 15% to 0.8%.",
"title": "Treatment strategies"
},
{
"paragraph_id": 14,
"text": "The FOLFOX study also demonstrated an improvement in treatment outcomes. Positive response increased from 46% in the BSA-dosed group to 70% in the dose-adjusted group. Median progression free survival (PFS) and overall survival (OS) both improved by six months in the dose adjusted group.",
"title": "Treatment strategies"
},
{
"paragraph_id": 15,
"text": "One approach that can help clinicians individualize chemotherapy dosing is to measure the drug levels in blood plasma over time and adjust dose according to a formula or algorithm to achieve optimal exposure. With an established target exposure for optimized treatment effectiveness with minimized toxicities, dosing can be personalized to achieve target exposure and optimal results for each person. Such an algorithm was used in the clinical trials cited above and resulted in significantly improved treatment outcomes.",
"title": "Treatment strategies"
},
{
"paragraph_id": 16,
"text": "Oncologists are already individualizing dosing of some cancer drugs based on exposure. Carboplatin and busulfan dosing rely upon results from blood tests to calculate the optimal dose for each person. Simple blood tests are also available for dose optimization of methotrexate, 5-FU, paclitaxel, and docetaxel.",
"title": "Treatment strategies"
},
{
"paragraph_id": 17,
"text": "The serum albumin level immediately prior to chemotherapy administration is an independent prognostic predictor of survival in various cancer types.",
"title": "Treatment strategies"
},
{
"paragraph_id": 18,
"text": "Alkylating agents are the oldest group of chemotherapeutics in use today. Originally derived from mustard gas used in World War I, there are now many types of alkylating agents in use. They are so named because of their ability to alkylate many molecules, including proteins, RNA and DNA. This ability to bind covalently to DNA via their alkyl group is the primary cause for their anti-cancer effects. DNA is made of two strands and the molecules may either bind twice to one strand of DNA (intrastrand crosslink) or may bind once to both strands (interstrand crosslink). If the cell tries to replicate crosslinked DNA during cell division, or tries to repair it, the DNA strands can break. This leads to a form of programmed cell death called apoptosis. Alkylating agents will work at any point in the cell cycle and thus are known as cell cycle-independent drugs. For this reason, the effect on the cell is dose dependent; the fraction of cells that die is directly proportional to the dose of drug.",
"title": "Treatment strategies"
},
{
"paragraph_id": 19,
"text": "The subtypes of alkylating agents are the nitrogen mustards, nitrosoureas, tetrazines, aziridines, cisplatins and derivatives, and non-classical alkylating agents. Nitrogen mustards include mechlorethamine, cyclophosphamide, melphalan, chlorambucil, ifosfamide and busulfan. Nitrosoureas include N-Nitroso-N-methylurea (MNU), carmustine (BCNU), lomustine (CCNU) and semustine (MeCCNU), fotemustine and streptozotocin. Tetrazines include dacarbazine, mitozolomide and temozolomide. Aziridines include thiotepa, mytomycin and diaziquone (AZQ). Cisplatin and derivatives include cisplatin, carboplatin and oxaliplatin. They impair cell function by forming covalent bonds with the amino, carboxyl, sulfhydryl, and phosphate groups in biologically important molecules. Non-classical alkylating agents include procarbazine and hexamethylmelamine.",
"title": "Treatment strategies"
},
{
"paragraph_id": 20,
"text": "Anti-metabolites are a group of molecules that impede DNA and RNA synthesis. Many of them have a similar structure to the building blocks of DNA and RNA. The building blocks are nucleotides; a molecule comprising a nucleobase, a sugar and a phosphate group. The nucleobases are divided into purines (guanine and adenine) and pyrimidines (cytosine, thymine and uracil). Anti-metabolites resemble either nucleobases or nucleosides (a nucleotide without the phosphate group), but have altered chemical groups. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. By inhibiting the enzymes involved in DNA synthesis, they prevent mitosis because the DNA cannot duplicate itself. Also, after misincorporation of the molecules into DNA, DNA damage can occur and programmed cell death (apoptosis) is induced. Unlike alkylating agents, anti-metabolites are cell cycle dependent. This means that they only work during a specific part of the cell cycle, in this case S-phase (the DNA synthesis phase). For this reason, at a certain dose, the effect plateaus and proportionally no more cell death occurs with increased doses. Subtypes of the anti-metabolites are the anti-folates, fluoropyrimidines, deoxynucleoside analogues and thiopurines.",
"title": "Treatment strategies"
},
{
"paragraph_id": 21,
"text": "The anti-folates include methotrexate and pemetrexed. Methotrexate inhibits dihydrofolate reductase (DHFR), an enzyme that regenerates tetrahydrofolate from dihydrofolate. When the enzyme is inhibited by methotrexate, the cellular levels of folate coenzymes diminish. These are required for thymidylate and purine production, which are both essential for DNA synthesis and cell division. Pemetrexed is another anti-metabolite that affects purine and pyrimidine production, and therefore also inhibits DNA synthesis. It primarily inhibits the enzyme thymidylate synthase, but also has effects on DHFR, aminoimidazole carboxamide ribonucleotide formyltransferase and glycinamide ribonucleotide formyltransferase. The fluoropyrimidines include fluorouracil and capecitabine. Fluorouracil is a nucleobase analogue that is metabolised in cells to form at least two active products; 5-fluourouridine monophosphate (FUMP) and 5-fluoro-2'-deoxyuridine 5'-phosphate (fdUMP). FUMP becomes incorporated into RNA and fdUMP inhibits the enzyme thymidylate synthase; both of which lead to cell death. Capecitabine is a prodrug of 5-fluorouracil that is broken down in cells to produce the active drug. The deoxynucleoside analogues include cytarabine, gemcitabine, decitabine, azacitidine, fludarabine, nelarabine, cladribine, clofarabine, and pentostatin. The thiopurines include thioguanine and mercaptopurine.",
"title": "Treatment strategies"
},
{
"paragraph_id": 22,
"text": "Anti-microtubule agents are plant-derived chemicals that block cell division by preventing microtubule function. Microtubules are an important cellular structure composed of two proteins, α-tubulin and β-tubulin. They are hollow, rod-shaped structures that are required for cell division, among other cellular functions. Microtubules are dynamic structures, which means that they are permanently in a state of assembly and disassembly. Vinca alkaloids and taxanes are the two main groups of anti-microtubule agents, and although both of these groups of drugs cause microtubule dysfunction, their mechanisms of action are completely opposite: Vinca alkaloids prevent the assembly of microtubules, whereas taxanes prevent their disassembly. By doing so, they can induce mitotic catastrophe in the cancer cells. Following this, cell cycle arrest occurs, which induces programmed cell death (apoptosis). These drugs can also affect blood vessel growth, an essential process that tumours utilise in order to grow and metastasise.",
"title": "Treatment strategies"
},
{
"paragraph_id": 23,
"text": "Vinca alkaloids are derived from the Madagascar periwinkle, Catharanthus roseus, formerly known as Vinca rosea. They bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules. The original vinca alkaloids are natural products that include vincristine and vinblastine. Following the success of these drugs, semi-synthetic vinca alkaloids were produced: vinorelbine (used in the treatment of non-small-cell lung cancer), vindesine, and vinflunine. These drugs are cell cycle-specific. They bind to the tubulin molecules in S-phase and prevent proper microtubule formation required for M-phase.",
"title": "Treatment strategies"
},
{
"paragraph_id": 24,
"text": "Taxanes are natural and semi-synthetic drugs. The first drug of their class, paclitaxel, was originally extracted from Taxus brevifolia, the Pacific yew. Now this drug and another in this class, docetaxel, are produced semi-synthetically from a chemical found in the bark of another yew tree, Taxus baccata.",
"title": "Treatment strategies"
},
{
"paragraph_id": 25,
"text": "Podophyllotoxin is an antineoplastic lignan obtained primarily from the American mayapple (Podophyllum peltatum) and Himalayan mayapple (Sinopodophyllum hexandrum). It has anti-microtubule activity, and its mechanism is similar to that of vinca alkaloids in that they bind to tubulin, inhibiting microtubule formation. Podophyllotoxin is used to produce two other drugs with different mechanisms of action: etoposide and teniposide.",
"title": "Treatment strategies"
},
{
"paragraph_id": 26,
"text": "Topoisomerase inhibitors are drugs that affect the activity of two enzymes: topoisomerase I and topoisomerase II. When the DNA double-strand helix is unwound, during DNA replication or transcription, for example, the adjacent unopened DNA winds tighter (supercoils), like opening the middle of a twisted rope. The stress caused by this effect is in part aided by the topoisomerase enzymes. They produce single- or double-strand breaks into DNA, reducing the tension in the DNA strand. This allows the normal unwinding of DNA to occur during replication or transcription. Inhibition of topoisomerase I or II interferes with both of these processes.",
"title": "Treatment strategies"
},
{
"paragraph_id": 27,
"text": "Two topoisomerase I inhibitors, irinotecan and topotecan, are semi-synthetically derived from camptothecin, which is obtained from the Chinese ornamental tree Camptotheca acuminata. Drugs that target topoisomerase II can be divided into two groups. The topoisomerase II poisons cause increased levels enzymes bound to DNA. This prevents DNA replication and transcription, causes DNA strand breaks, and leads to programmed cell death (apoptosis). These agents include etoposide, doxorubicin, mitoxantrone and teniposide. The second group, catalytic inhibitors, are drugs that block the activity of topoisomerase II, and therefore prevent DNA synthesis and translation because the DNA cannot unwind properly. This group includes novobiocin, merbarone, and aclarubicin, which also have other significant mechanisms of action.",
"title": "Treatment strategies"
},
{
"paragraph_id": 28,
"text": "The cytotoxic antibiotics are a varied group of drugs that have various mechanisms of action. The common theme that they share in their chemotherapy indication is that they interrupt cell division. The most important subgroup is the anthracyclines and the bleomycins; other prominent examples include mitomycin C and actinomycin.",
"title": "Treatment strategies"
},
{
"paragraph_id": 29,
"text": "Among the anthracyclines, doxorubicin and daunorubicin were the first, and were obtained from the bacterium Streptomyces peucetius. Derivatives of these compounds include epirubicin and idarubicin. Other clinically used drugs in the anthracycline group are pirarubicin, aclarubicin, and mitoxantrone. The mechanisms of anthracyclines include DNA intercalation (molecules insert between the two strands of DNA), generation of highly reactive free radicals that damage intercellular molecules and topoisomerase inhibition.",
"title": "Treatment strategies"
},
{
"paragraph_id": 30,
"text": "Actinomycin is a complex molecule that intercalates DNA and prevents RNA synthesis.",
"title": "Treatment strategies"
},
{
"paragraph_id": 31,
"text": "Bleomycin, a glycopeptide isolated from Streptomyces verticillus, also intercalates DNA, but produces free radicals that damage DNA. This occurs when bleomycin binds to a metal ion, becomes chemically reduced and reacts with oxygen.",
"title": "Treatment strategies"
},
{
"paragraph_id": 32,
"text": "Mitomycin is a cytotoxic antibiotic with the ability to alkylate DNA.",
"title": "Treatment strategies"
},
{
"paragraph_id": 33,
"text": "Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). According to a recent (2016) systematic review, oral therapies present additional challenges for patients and care teams to maintain and support adherence to treatment plans.",
"title": "Treatment strategies"
},
{
"paragraph_id": 34,
"text": "There are many intravenous methods of drug delivery, known as vascular access devices. These include the winged infusion device, peripheral venous catheter, midline catheter, peripherally inserted central catheter (PICC), central venous catheter and implantable port. The devices have different applications regarding duration of chemotherapy treatment, method of delivery and types of chemotherapeutic agent.",
"title": "Treatment strategies"
},
{
"paragraph_id": 35,
"text": "Depending on the person, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly used systems are the Hickman line, the Port-a-Cath, and the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and eliminate the need for repeated insertion of peripheral cannulae.",
"title": "Treatment strategies"
},
{
"paragraph_id": 36,
"text": "Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumors. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases.",
"title": "Treatment strategies"
},
{
"paragraph_id": 37,
"text": "Topical chemotherapies, such as 5-fluorouracil, are used to treat some cases of non-melanoma skin cancer.",
"title": "Treatment strategies"
},
{
"paragraph_id": 38,
"text": "If the cancer has central nervous system involvement, or with meningeal disease, intrathecal chemotherapy may be administered.",
"title": "Treatment strategies"
},
{
"paragraph_id": 39,
"text": "Chemotherapeutic techniques have a range of side effects that depend on the type of medications used. The most common medications affect mainly the fast-dividing cells of the body, such as blood cells and the cells lining the mouth, stomach, and intestines. Chemotherapy-related toxicities can occur acutely after administration, within hours or days, or chronically, from weeks to years.",
"title": "Adverse effects"
},
{
"paragraph_id": 40,
"text": "Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets. Anemia and thrombocytopenia may require blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 x 10/litre) can be improved with synthetic G-CSF (granulocyte-colony-stimulating factor, e.g., filgrastim, lenograstim, efbemalenograstim alfa).",
"title": "Adverse effects"
},
{
"paragraph_id": 41,
"text": "In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogenic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the person before the treatment, multiplied and then re-injected afterward; in allogenic BMTs, the source is a donor.) However, some people still develop diseases because of this interference with bone marrow.",
"title": "Adverse effects"
},
{
"paragraph_id": 42,
"text": "Although people receiving chemotherapy are encouraged to wash their hands, avoid sick people, and take other infection-reducing steps, about 85% of infections are due to naturally occurring microorganisms in the person's own gastrointestinal tract (including oral cavity) and skin. This may manifest as systemic infections, such as sepsis, or as localized outbreaks, such as Herpes simplex, shingles, or other members of the Herpesviridea. The risk of illness and death can be reduced by taking common antibiotics such as quinolones or trimethoprim/sulfamethoxazole before any fever or sign of infection appears. Quinolones show effective prophylaxis mainly with hematological cancer. However, in general, for every five people who are immunosuppressed following chemotherapy who take an antibiotic, one fever can be prevented; for every 34 who take an antibiotic, one death can be prevented. Sometimes, chemotherapy treatments are postponed because the immune system is suppressed to a critically low level.",
"title": "Adverse effects"
},
{
"paragraph_id": 43,
"text": "In Japan, the government has approved the use of some medicinal mushrooms like Trametes versicolor, to counteract depression of the immune system in people undergoing chemotherapy.",
"title": "Adverse effects"
},
{
"paragraph_id": 44,
"text": "Trilaciclib is an inhibitor of cyclin-dependent kinase 4/6 approved for the prevention of myelosuppression caused by chemotherapy. The drug is given before chemotherapy to protect bone marrow function.",
"title": "Adverse effects"
},
{
"paragraph_id": 45,
"text": "Due to immune system suppression, neutropenic enterocolitis (typhlitis) is a \"life-threatening gastrointestinal complication of chemotherapy.\" Typhlitis is an intestinal infection which may manifest itself through symptoms including nausea, vomiting, diarrhea, a distended abdomen, fever, chills, or abdominal pain and tenderness.",
"title": "Adverse effects"
},
{
"paragraph_id": 46,
"text": "Typhlitis is a medical emergency. It has a very poor prognosis and is often fatal unless promptly recognized and aggressively treated. Successful treatment hinges on early diagnosis provided by a high index of suspicion and the use of CT scanning, nonoperative treatment for uncomplicated cases, and sometimes elective right hemicolectomy to prevent recurrence.",
"title": "Adverse effects"
},
{
"paragraph_id": 47,
"text": "Nausea, vomiting, anorexia, diarrhea, abdominal cramps, and constipation are common side-effects of chemotherapeutic medications that kill fast-dividing cells. Malnutrition and dehydration can result when the recipient does not eat or drink enough, or when the person vomits frequently, because of gastrointestinal damage. This can result in rapid weight loss, or occasionally in weight gain, if the person eats too much in an effort to allay nausea or heartburn. Weight gain can also be caused by some steroid medications. These side-effects can frequently be reduced or eliminated with antiemetic drugs. Low-certainty evidence also suggests that probiotics may have a preventative and treatment effect of diarrhoea related to chemotherapy alone and with radiotherapy. However, a high index of suspicion is appropriate, since diarrhoea and bloating are also symptoms of typhlitis, a very serious and potentially life-threatening medical emergency that requires immediate treatment.",
"title": "Adverse effects"
},
{
"paragraph_id": 48,
"text": "Anemia can be a combined outcome caused by myelosuppressive chemotherapy, and possible cancer-related causes such as bleeding, blood cell destruction (hemolysis), hereditary disease, kidney dysfunction, nutritional deficiencies or anemia of chronic disease. Treatments to mitigate anemia include hormones to boost blood production (erythropoietin), iron supplements, and blood transfusions. Myelosuppressive therapy can cause a tendency to bleed easily, leading to anemia. Medications that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. Extremely low platelet counts may be temporarily boosted through platelet transfusions and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover.",
"title": "Adverse effects"
},
{
"paragraph_id": 49,
"text": "Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours.",
"title": "Adverse effects"
},
{
"paragraph_id": 50,
"text": "Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens, and helping to successfully manage these symptoms in many people. Effective mediation of these unpleasant and sometimes debilitating symptoms results in increased quality of life for the recipient and more efficient treatment cycles, due to less stoppage of treatment due to better tolerance and better overall health.",
"title": "Adverse effects"
},
{
"paragraph_id": 51,
"text": "Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in \"chemo curls.\" Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens.",
"title": "Adverse effects"
},
{
"paragraph_id": 52,
"text": "Chemotherapy induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or less often alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than men.",
"title": "Adverse effects"
},
{
"paragraph_id": 53,
"text": "Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised.",
"title": "Adverse effects"
},
{
"paragraph_id": 54,
"text": "Development of secondary neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely to get a secondary neoplasm during the 30 years after treatment than the general population. Not all of this increase can be attributed to chemotherapy.",
"title": "Adverse effects"
},
{
"paragraph_id": 55,
"text": "Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil.",
"title": "Adverse effects"
},
{
"paragraph_id": 56,
"text": "Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles.",
"title": "Adverse effects"
},
{
"paragraph_id": 57,
"text": "People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years.",
"title": "Adverse effects"
},
{
"paragraph_id": 58,
"text": "Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs.",
"title": "Adverse effects"
},
{
"paragraph_id": 59,
"text": "In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia came to the result that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years.",
"title": "Adverse effects"
},
{
"paragraph_id": 60,
"text": "Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression.",
"title": "Adverse effects"
},
{
"paragraph_id": 61,
"text": "In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened.",
"title": "Adverse effects"
},
{
"paragraph_id": 62,
"text": "Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment.",
"title": "Adverse effects"
},
{
"paragraph_id": 63,
"text": "Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as \"chemo brain\" in popular and social media.",
"title": "Adverse effects"
},
{
"paragraph_id": 64,
"text": "In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated.",
"title": "Adverse effects"
},
{
"paragraph_id": 65,
"text": "Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine.",
"title": "Adverse effects"
},
{
"paragraph_id": 66,
"text": "Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis.",
"title": "Adverse effects"
},
{
"paragraph_id": 67,
"text": "Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also due direct effects of drug clearance by the kidneys. Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury.",
"title": "Adverse effects"
},
{
"paragraph_id": 68,
"text": "Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss.",
"title": "Adverse effects"
},
{
"paragraph_id": 69,
"text": "Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions.",
"title": "Adverse effects"
},
{
"paragraph_id": 70,
"text": "Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease).",
"title": "Adverse effects"
},
{
"paragraph_id": 71,
"text": "Hand-foot syndrome is another side effect to cytotoxic chemotherapy.",
"title": "Adverse effects"
},
{
"paragraph_id": 72,
"text": "Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition.",
"title": "Adverse effects"
},
{
"paragraph_id": 73,
"text": "Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer.",
"title": "Limitations"
},
{
"paragraph_id": 74,
"text": "The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump out drugs from the brain and brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier.",
"title": "Limitations"
},
{
"paragraph_id": 75,
"text": "Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system.",
"title": "Limitations"
},
{
"paragraph_id": 76,
"text": "Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug can not prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur which prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cells stress can induce changes in gene expression that enables resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways.",
"title": "Resistance"
},
{
"paragraph_id": 77,
"text": "Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecule and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than that seen of cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug.",
"title": "Cytotoxics and targeted therapies"
},
{
"paragraph_id": 78,
"text": "Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth.",
"title": "Mechanism of action"
},
{
"paragraph_id": 79,
"text": "In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis.",
"title": "Mechanism of action"
},
{
"paragraph_id": 80,
"text": "As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor.",
"title": "Mechanism of action"
},
{
"paragraph_id": 81,
"text": "Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause cancer immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy.",
"title": "Mechanism of action"
},
{
"paragraph_id": 82,
"text": "Some chemotherapy drugs are used in diseases other than cancer, such as in autoimmune disorders, and noncancerous plasma cell dyscrasia. In some cases they are often used at lower doses, which means that the side effects are minimized, while in other cases doses similar to ones used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomid in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma such as lenalidomide have shown promise in treating AL amyloidosis.",
"title": "Other uses"
},
{
"paragraph_id": 83,
"text": "Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in non-cancer setting, the treatment is still called \"chemotherapy\", and is often done in the same treatment centers used for people with cancer.",
"title": "Other uses"
},
{
"paragraph_id": 84,
"text": "In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) has since then introduced the concept of hazardous drugs after publishing a recommendation in 1983 regarding handling hazardous drugs. The adaptation of federal regulations came when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 85,
"text": "The National Institute for Occupational Safety and Health (NIOSH) has been conducting an assessment in the workplace since then regarding these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported by the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist didn't have any other risk factor for cancer, and therefore, her cancer was attributed to the exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. Another case happened when a malfunction in biosafety cabinetry is believed to have exposed nursing personnel to antineoplastic drugs. Investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 86,
"text": "Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing antineoplastic drugs in health care facilities are also at risk of exposure.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 87,
"text": "Dermal exposure is thought to be the main route of exposure due to the fact that significant amounts of the antineoplastic agents have been found in the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Ingestion by hand to mouth is a route of exposure that is less likely compared to others because of the enforced hygienic standard in the health institutions. However, it is still a potential route, especially in the workplace, outside of a health institute. One can also be exposed to these hazardous drugs through injection by needle sticks. Research conducted in this area has established that occupational exposure occurs by examining evidence in multiple urine samples from health care workers.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 88,
"text": "Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs could have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 89,
"text": "Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, there have been research studies that linked alkylating agents with humans developing leukemias. Studies have reported elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations revealed that there is a potential genotoxic effect from anti-neoplastic drugs to workers in health care settings.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 90,
"text": "As of 2018, there were no occupational exposure limits set for antineoplastic drugs, i.e., OSHA or the American Conference of Governmental Industrial Hygienists (ACGIH) have not set workplace safety guidelines.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 91,
"text": "NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of the technique of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 92,
"text": "The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 93,
"text": "The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and put them in a bag for their disposal inside the ventilated cabinet.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 94,
"text": "Drugs should only be administered using protective medical devices such as needle lists and closed systems and techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 95,
"text": "Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 96,
"text": "All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information ought to be provided when health care workers come into contact with the drugs, that is, perform the initial assignment in a work area with hazardous drugs. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 97,
"text": "When performing cleaning and decontaminating the work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish up cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact or has a trace of the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 98,
"text": "A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 99,
"text": "A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 100,
"text": "Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 101,
"text": "Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and urinary metabolite of 5-fluorouracil. In addition to this, there are other drugs used to measure the drugs directly in the urine, although they are rarely used. A measurement of these drugs directly in one's urine is a sign of high exposure levels and that an uptake of the drugs is happening either through inhalation or dermally.",
"title": "Occupational exposure and safe handling"
},
{
"paragraph_id": 102,
"text": "There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide the medicines used for cancer into several different types.",
"title": "Available agents"
},
{
"paragraph_id": 103,
"text": "The first use of small-molecule drugs to treat cancer was in the early 20th century, although the specific chemicals first used were not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was discovered to be a potent suppressor of hematopoiesis (blood production). A similar family of compounds known as nitrogen mustards were studied further during World War II at the Yale School of Medicine. It was reasoned that an agent that damaged the rapidly growing white blood cells might have a similar effect on cancer. Therefore, in December 1942, several people with advanced lymphomas (cancers of the lymphatic system and lymph nodes) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable. Concurrently, during a military operation in World War II, following a German air raid on the Italian harbour of Bari, several hundred people were accidentally exposed to mustard gas, which had been transported there by the Allied forces to prepare for possible retaliation in the event of German use of chemical warfare. The survivors were later found to have very low white blood cell counts. After WWII was over and the reports declassified, the experiences converged and led researchers to look for other substances that might have similar effects against cancer. The first chemotherapy drug to be developed from this line of research was mustine. Since then, many other drugs have been developed to treat cancer, and drug development has exploded into a multibillion-dollar industry, although the principles and limitations of chemotherapy discovered by the early researchers still apply.",
"title": "History"
},
{
"paragraph_id": 104,
"text": "The word chemotherapy without a modifier usually refers to cancer treatment, but its historical meaning was broader. The term was coined in the early 1900s by Paul Ehrlich as meaning any use of chemicals to treat any disease (chemo- + -therapy), such as the use of antibiotics (antibacterial chemotherapy). Ehrlich was not optimistic that effective chemotherapy drugs would be found for the treatment of cancer. The first modern chemotherapeutic agent was arsphenamine, an arsenic compound discovered in 1907 and used to treat syphilis. This was later followed by sulfonamides (sulfa drugs) and penicillin. In today's usage, the sense \"any treatment of disease with drugs\" is often expressed with the word pharmacotherapy. In terms of metaphorical language, 'chemotherapy' can be paralleled with the idea of a 'storm', as both can cause distress but afterwards may have a healing/cleaning effect.",
"title": "History"
},
{
"paragraph_id": 105,
"text": "The top 10 best-selling (in terms of revenue) cancer drugs of 2013:",
"title": "Sales"
},
{
"paragraph_id": 106,
"text": "Specially targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill or reduced toxicity or both.",
"title": "Research"
},
{
"paragraph_id": 107,
"text": "Antibody-drug conjugates (ADCs) comprise an antibody, drug and a linker between them. The antibody will be targeted at a preferentially expressed protein in the tumour cells (known as a tumor antigen) or on cells that the tumor can utilise, such as blood vessel endothelial cells. They bind to the tumor antigen and are internalised, where the linker releases the drug into the cell. These specially targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in people who are sicker and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches.",
"title": "Research"
},
{
"paragraph_id": 108,
"text": "The first approved drug of this type was gemtuzumab ozogamicin (Mylotarg), released by Wyeth (now Pfizer). The drug was approved to treat acute myeloid leukemia. Two other drugs, trastuzumab emtansine and brentuximab vedotin, are both in late clinical trials, and the latter has been granted accelerated approval for the treatment of refractory Hodgkin's lymphoma and systemic anaplastic large cell lymphoma.",
"title": "Research"
},
{
"paragraph_id": 109,
"text": "Nanoparticles are 1–1000 nanometer (nm) sized particles that can promote tumor selectivity and aid in delivering low-solubility drugs. Nanoparticles can be targeted passively or actively. Passive targeting exploits the difference between tumor blood vessels and normal blood vessels. Blood vessels in tumors are \"leaky\" because they have gaps from 200 to 2000 nm, which allow nanoparticles to escape into the tumor. Active targeting uses biological molecules (antibodies, proteins, DNA and receptor ligands) to preferentially target the nanoparticles to the tumor cells. There are many types of nanoparticle delivery systems, such as silica, polymers, liposomes and magnetic particles. Nanoparticles made of magnetic material can also be used to concentrate agents at tumor sites using an externally applied magnetic field. They have emerged as a useful vehicle in magnetic drug delivery for poorly soluble agents such as paclitaxel.",
"title": "Research"
},
{
"paragraph_id": 110,
"text": "Electrochemotherapy is the combined treatment in which injection of a chemotherapeutic drug is followed by application of high-voltage electric pulses locally to the tumor. The treatment enables the chemotherapeutic drugs, which otherwise cannot or hardly go through the membrane of cells (such as bleomycin and cisplatin), to enter the cancer cells. Hence, greater effectiveness of antitumor treatment is achieved.",
"title": "Research"
},
{
"paragraph_id": 111,
"text": "Clinical electrochemotherapy has been successfully used for treatment of cutaneous and subcutaneous tumors irrespective of their histological origin. The method has been reported as safe, simple and highly effective in all reports on clinical use of electrochemotherapy. According to the ESOPE project (European Standard Operating Procedures of Electrochemotherapy), the Standard Operating Procedures (SOP) for electrochemotherapy were prepared, based on the experience of the leading European cancer centres on electrochemotherapy. Recently, new electrochemotherapy modalities have been developed for treatment of internal tumors using surgical procedures, endoscopic routes or percutaneous approaches to gain access to the treatment area.",
"title": "Research"
},
{
"paragraph_id": 112,
"text": "Hyperthermia therapy is heat treatment for cancer that can be a powerful tool when used in combination with chemotherapy (thermochemotherapy) or radiation for the control of a variety of cancers. The heat can be applied locally to the tumor site, which will dilate blood vessels to the tumor, allowing more chemotherapeutic medication to enter the tumor. Additionally, the tumor cell membrane will become more porous, further allowing more of the chemotherapeutic medicine to enter the tumor cell.",
"title": "Research"
},
{
"paragraph_id": 113,
"text": "Hyperthermia has also been shown to help prevent or reverse \"chemo-resistance.\" Chemotherapy resistance sometimes develops over time as the tumors adapt and can overcome the toxicity of the chemo medication. \"Overcoming chemoresistance has been extensively studied within the past, especially using CDDP-resistant cells. In regard to the potential benefit that drug-resistant cells can be recruited for effective therapy by combining chemotherapy with hyperthermia, it was important to show that chemoresistance against several anticancer drugs (e.g. mitomycin C, anthracyclines, BCNU, melphalan) including CDDP could be reversed at least partially by the addition of heat.",
"title": "Research"
},
{
"paragraph_id": 114,
"text": "Chemotherapy is used in veterinary medicine similar to how it is used in human medicine.",
"title": "Other animals"
}
] | Chemotherapy is a type of cancer treatment that uses one or more anti-cancer drugs as part of a standardized chemotherapy regimen. Chemotherapy may be given with a curative intent or it may aim to prolong life or to reduce symptoms. Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology. The term chemotherapy has come to connote non-specific usage of intracellular poisons to inhibit mitosis or induce DNA damage, which is why inhibition of DNA repair can augment chemotherapy. The connotation of the word chemotherapy excludes more selective agents that block extracellular signals. Therapies with specific molecular or genetic targets that inhibit growth-promoting signals from classic endocrine hormones are now called hormonal therapies. By contrast, inhibition of other growth signals, such as those associated with receptor tyrosine kinases, is referred to as targeted therapy. Importantly, the use of drugs constitutes systemic therapy for cancer in that they are introduced into the blood stream and are therefore in principle able to address cancer at any anatomic location in the body. Systemic therapy is often used in conjunction with other modalities that constitute local therapy for cancer such as radiation therapy, surgery or hyperthermia therapy. Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis) but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression, mucositis, and alopecia. Because of the effect on immune cells, chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self. These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others. | 2001-11-19T18:11:28Z | 2023-12-28T21:31:00Z | [
"Template:Further",
"Template:Manual",
"Template:Tumors",
"Template:Major Drug Groups",
"Template:Redirect2",
"Template:Use dmy dates",
"Template:Main",
"Template:Citation needed",
"Template:Infobox medical intervention (new)",
"Template:TOC limit",
"Template:Page needed",
"Template:Multiple image",
"Template:Cite web",
"Template:Columns-list",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite book",
"Template:Short description",
"Template:About",
"Template:Rp",
"Template:Cn",
"Template:Citation",
"Template:Sisterlinks",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chemotherapy |
7,174 | Chinese historiography | Chinese historiography is the study of the techniques and sources used by historians to develop the recorded history of China.
The recording of events in Chinese history dates back to the Shang dynasty (c. 1600–1046 BC). Many written examples survive of ceremonial inscriptions, divinations and records of family names, which were carved or painted onto tortoise shell or bones. The uniformly religious context of Shang written records makes avoidance of preservation bias important when interpreting Shang history. The first conscious attempt to record history in China may have been the inscription on the Zhou dynasty bronze Shi Qiang pan. This and thousands of other Chinese bronze inscriptions form our primary sources for the period in which they were interred in elite burials.
The oldest surviving history texts of China were compiled in the Book of Documents (Shujing). The Spring and Autumn Annals (Chunqiu), the official chronicle of the State of Lu, cover the period from 722 to 481 BC and are among the earliest surviving Chinese historical texts to be arranged as annals. The compilations of both of these works are traditionally ascribed to Confucius. The Zuo zhuan, attributed to Zuo Qiuming in the 5th century BC, is the earliest Chinese work of narrative history and covers the period from 722 to 468 BC. The anonymous Zhan Guo Ce was a renowned ancient Chinese historical work composed of sporadic materials on the Warring States period between the 3rd and 1st centuries BC.
The first systematic Chinese historical text, the Records of the Grand Historian (Shiji), was written by Sima Qian (c. 145 or 135–86 BC) based on work by his father, Sima Tan, during the Han Dynasty. It covers the period from the time of the Yellow Emperor until the author's own lifetime. Two instances of systematic book-burning and a palace fire in the preceding centuries narrowed the sources available for this work. Because of this highly praised and frequently copied work, Sima Qian is often regarded as the father of Chinese historiography. The Twenty-Four Histories, the official histories of the dynasties considered legitimate by imperial Chinese historians, all copied Sima Qian's format. Typically, rulers initiating a new dynasty would employ scholars to compile a final history from the records of the previous one, using a broad variety of sources.
Around the turn of the millennium, father–son imperial librarians Liu Xiang and Liu Xin edited and catalogued a large number of early texts, including each individual text listed by name above. Much transmitted literature surviving today is known to be ultimately the version they edited down from a larger volume of material available at the time. In 190, the imperial capital was again destroyed by arson, causing the loss of significant amounts of historical material.
The Shitong was the first Chinese work about historiography. It was compiled by Liu Zhiji between 708 and 710 AD. The book describes the general pattern of the official dynastic histories with regard to the structure, method, arrangement, sequence, caption, and commentary, dating back to the Warring States period.
The Zizhi Tongjian was a pioneering reference work of Chinese historiography. Emperor Yingzong of Song ordered Sima Guang and other scholars to begin compiling this universal history of China in 1065, and they presented it to his successor Shenzong in 1084. It contains 294 volumes and about three million characters, and it narrates the history of China from 403 BC to the beginning of the Song dynasty in 959. This style broke the nearly thousand-year tradition of Sima Qian, which employed annals for imperial reigns but biographies or treatises for other topics. The more consistent style of the Zizhi Tongjian was not followed by later official histories. In the mid-11th century, Ouyang Xiu was heavily influenced by the work of Xue Juzheng. This led to the creation of the New History of the Five Dynasties, which covered five dynasties in over 70 chapters.
Toward the end of the Qing dynasty in the early 20th century, scholars looked to Japan and the West for models. In the late 1890s, although deeply learned in the traditional forms, Liang Qichao began to publish extensive and influential studies and polemics that converted young readers to a new type of historiography that Liang regarded as more scientific. Liu Yizheng published several specialized history works including History of Chinese Culture. This next generation became professional historians, training and teaching in universities. They included Chang Chi-yun, Gu Jiegang, Fu Sinian, and Tsiang Tingfu, who were PhDs from Columbia University; and Chen Yinke, who conducted his investigations into medieval Chinese history in both Europe and the United States. Other historians, such as Qian Mu, who was trained largely through independent study, were more conservative but remained innovative in their response to world trends. In the 1920s, wide-ranging scholars, such as Guo Moruo, adapted Marxism in order to portray China as a nation among nations, rather than having an exotic and isolated history. The ensuing years saw historians such as Wu Han master both Western theories, including Marxism, and Chinese learning.
Like the three ages of the Greek poet Hesiod, the oldest Chinese historiography viewed mankind as living in a fallen age of depravity, cut off from the virtues of the past, as Confucius and his disciples revered the sage kings Emperor Yao and Emperor Shun.
Unlike Hesiod's system, however, the Duke of Zhou's idea of the Mandate of Heaven as a rationale for dethroning the supposedly divine Zi clan led subsequent historians to see man's fall as a cyclical pattern. In this view, a new dynasty is founded by a morally upright founder, but his successors cannot help but become increasingly corrupt and dissolute. This immorality removes the dynasty's divine favor and is manifested by natural disasters (particularly floods), rebellions, and foreign invasions. Eventually, the dynasty becomes weak enough to be replaced by a new one, whose founder is able to rectify many of society's problems and begin the cycle anew. Over time, many people felt a full correction was not possible, and that the golden age of Yao and Shun could not be attained.
This teleological theory implies that there can be only one rightful sovereign under heaven at a time. Thus, despite the fact that Chinese history has had many lengthy and contentious periods of disunity, a great effort was made by official historians to establish a legitimate precursor whose fall allowed a new dynasty to acquire its mandate. Similarly, regardless of the particular merits of individual emperors, founders would be portrayed in more laudatory terms, and the last ruler of a dynasty would always be castigated as depraved and unworthy – even when that was not the case. Such a narrative was employed after the fall of the empire by those compiling the history of the Qing, and by those who justified the attempted restorations of the imperial system by Yuan Shikai and Zhang Xun.
As early as the 1930s, the American scholar Owen Lattimore argued that China was the product of the interaction of farming and pastoral societies, rather than simply the expansion of the Han people. Lattimore did not accept the more extreme Sino-Babylonian theories that the essential elements of early Chinese technology and religion had come from Western Asia, but he was among the scholars to argue against the assumption they had all been indigenous.
Both the Republic of China and the People's Republic of China hold the view that Chinese history should include all the ethnic groups of the lands held by the Qing dynasty during its territorial peak, with these ethnicities forming part of the Zhonghua minzu (Chinese nation). This view is in contrast with Han chauvinism promoted by the Qing-era Tongmenghui. This expanded view encompasses internal and external tributary lands, as well as conquest dynasties in the history of a China seen as a coherent multi-ethnic nation since time immemorial, incorporating and accepting the contributions and cultures of non-Han ethnicities.
The acceptance of this view by ethnic minorities sometimes depends on their views on present-day issues. The 14th Dalai Lama, long insistent on Tibet's history being separate from that of China, conceded in 2005 that Tibet "is a part of" China's "5,000-year history" as part of a new proposal for Tibetan autonomy. Korean nationalists have virulently reacted against China's application to UNESCO for recognition of the Goguryeo tombs in Chinese territory. The absolute independence of Goguryeo is a central aspect of Korean identity, because, according to Korean legend, Goguryeo was independent of China and Japan, compared to subordinate states such as the Joseon dynasty and the Korean Empire. The legacy of Genghis Khan has been contested between China, Mongolia, and Russia, all three states having significant numbers of ethnic Mongols within their borders and holding territory that was conquered by the Khan.
The Jin dynasty tradition of a new dynasty composing the official history for its preceding dynasty/dynasties has been seen to foster an ethnically inclusive interpretation of Chinese history. The compilation of official histories usually involved monumental intellectual labor. The Yuan and Qing dynasties, ruled by the Mongols and Manchus, faithfully carried out this practice, composing the official Chinese-language histories of the Han-ruled Song and Ming dynasties, respectively.
Recent Western scholars have reacted against the ethnically inclusive narrative in Communist-sponsored history by writing revisionist histories of China such as the New Qing History that feature, according to James A. Millward, "a degree of 'partisanship' for the indigenous underdogs of frontier history". Scholarly interest in writing about Chinese minorities from non-Chinese perspectives is growing. So too is the rejection of a unified cultural narrative in early China. Historians engaging with recent archaeological work increasingly find a rich amalgam of diverse cultures in regions that the received literature portrays as homogeneous.
Most Chinese history that is published in the People's Republic of China is based on a Marxist interpretation of history. These theories were first applied in the 1920s by Chinese scholars such as Guo Moruo, and became orthodoxy in academic study after 1949. The Marxist view of history is that history is governed by universal laws and that, according to these laws, a society moves through a series of stages, with the transition between stages driven by class struggle. These stages are primitive society, slave society, feudal society, capitalist society, and socialist society, the last of which is held to develop ultimately into communist society.
The official historical view within the People's Republic of China associates each of these stages with a particular era in Chinese history.
Because of the strength of the Chinese Communist Party and the importance of the Marxist interpretation of history in legitimizing its rule, it was for many years difficult for historians within the PRC to argue actively for non-Marxist or anti-Marxist interpretations of history. However, this political restriction is less confining than it may first appear: the Marxist historical framework is surprisingly flexible, and an alternative historical theory can fairly easily be rephrased in language that at least does not challenge the Marxist interpretation of history.
Partly because of the interest of Mao Zedong, historians in the 1950s took a special interest in the role of peasant rebellions in Chinese history and compiled documentary histories to examine them.
There are several problems with imposing Marx's European-based framework on Chinese history. First, slavery existed throughout China's history but never as the primary form of labor. Second, while the Zhou and earlier dynasties may be labeled as feudal, later dynasties were far more centralized than the European feudal societies Marx analyzed. To account for the discrepancy, Chinese Marxists invented the term "bureaucratic feudalism". The placement of the Tang as the beginning of the bureaucratic phase rests largely on the replacement of patronage networks with the imperial examination. Some world-systems analysts, such as Janet Abu-Lughod, claim that analysis of Kondratiev waves shows that capitalism first arose in Song dynasty China, although widespread trade was subsequently disrupted and then curtailed.
The Japanese scholar Tanigawa Michio, writing in the 1970s and 1980s, set out to revise the generally Marxist views of China prevalent in post-war Japan. Tanigawa writes that historians in Japan fell into two schools. One held that China followed the set European pattern which Marxists thought to be universal, that is, from ancient slavery to medieval feudalism to modern capitalism, while another group argued that "Chinese society was extraordinarily saturated with stagnancy, as compared to the West" and assumed that China existed in a "qualitatively different historical world from Western society". That is, there is an argument between those who see "unilinear, monistic world history" and those who conceive of a "two-tracked or multi-tracked world history". Tanigawa reviewed the applications of these theories in Japanese writings about Chinese history and then tested them by analyzing the Six Dynasties period (220–589 CE), which Marxist historians saw as feudal. His conclusion was that China did not have feudalism in the sense in which Marxists use the term, and that Chinese military governments did not lead to a European-style military aristocracy. The period established social and political patterns which shaped China's history from that point on.
There was a gradual relaxation of Marxist interpretation after the death of Mao Zedong in 1976, a trend that accelerated after the 1989 Tiananmen Square protests and the revolutions of 1989, which damaged Marxism's ideological legitimacy in the eyes of Chinese academics.
The modernization view of Chinese history sees Chinese society as a traditional society needing to become modern, usually with the implicit assumption of Western society as the model. Such a view was common among European and American historians during the 19th and early 20th centuries, but is now criticized as Eurocentric, since it provides an implicit justification for breaking the society from its static past and bringing it into the modern world under European direction.
By the mid-20th century, it was increasingly clear to historians that the notion of "changeless China" was untenable. A new concept, popularized by John Fairbank, was the notion of "change within tradition", which argued that China did change in the pre-modern period but that this change existed within certain cultural traditions. This notion has also been subject to the criticism that to say "China has not changed fundamentally" is tautological, since it requires that one look for things that have not changed and then arbitrarily define those as fundamental.
Nonetheless, studies seeing China's interaction with Europe as the driving force behind its recent history are still common. Such studies may consider the First Opium War as the starting point for China's modern period. Examples include the works of H.B. Morse, who wrote chronicles of China's international relations such as Trade and Relations of the Chinese Empire. The Chinese convention is to use the word jindai ("modern") to refer to a timeframe for modernity that begins with the Opium Wars and continues through the May Fourth period.
In the 1950s, several of Fairbank's students argued that Confucianism was incompatible with modernity. Joseph Levenson, Mary C. Wright, and Albert Feuerwerker argued, in effect, that traditional Chinese values were a barrier to modernity and would have to be abandoned before China could make progress. Wright concluded, "The failure of the T'ung-chih [Tongzhi] Restoration demonstrated with a rare clarity that even in the most favorable circumstances there is no way in which an effective modern state can be grafted onto a Confucian society. Yet in the decades that followed, the political ideas that had been tested and, for all their grandeur, found wanting, were never given a decent burial."
In a different view of modernization, the Japanese historian Naito Torajiro argued that China reached modernity during its mid-Imperial period, centuries before Europe. He believed that the reform of the civil service into a meritocratic system and the disappearance of the ancient Chinese nobility from the bureaucracy constituted a modern society. The problem with this approach is the subjective meaning of modernity. The Chinese nobility had been in decline since the Qin dynasty, and while the exams were largely meritocratic, success in them required time and resources, which meant that examinees were still typically drawn from the gentry. Moreover, expertise in the Confucian classics did not guarantee competent bureaucrats when it came to managing public works or preparing a budget. Confucian hostility to commerce placed merchants at the bottom of the four occupations, itself an archaism maintained by devotion to classic texts. The social goal continued to be to invest in land and enter the gentry, ideas more like those of the physiocrats than those of Adam Smith.
With ideas derived from Marx and Max Weber, Karl August Wittfogel argued that bureaucracy arose to manage irrigation systems. Despotism was needed to force the people into building canals, dikes, and waterways to increase agriculture. Yu the Great, one of China's legendary founders, is known for his control of the floods of the Yellow River. The hydraulic empire produces wealth from its stability; while dynasties may change, the structure remains intact until destroyed by modern powers. In Europe abundant rainfall meant less dependence on irrigation. In the Orient natural conditions were such that the bulk of the land could not be cultivated without large-scale irrigation works. As only a centralized administration could organize the building and maintenance of large-scale systems of irrigation, the need for such systems made bureaucratic despotism inevitable in Oriental lands.
When Wittfogel published his Oriental Despotism: A Comparative Study of Total Power, critics pointed out that water management was given the high status China accorded to officials concerned with taxes, rituals, or fighting off bandits. The theory also has a strong orientalist bent, regarding all Asian states as generally the same while finding reasons for European polities not fitting the pattern.
While Wittfogel's theories were not popular among Marxist historians in China, the economist Chi Ch'ao-ting used them in his influential 1936 book, Key Economic Areas in Chinese History, as Revealed in the Development of Public Works for Water-Control. The book identified key areas of grain production which, when controlled by a strong political power, permitted that power to dominate the rest of the country and enforce periods of stability.
Convergence theory, including Hu Shih and Ray Huang's involution theory, holds that the past 150 years have been a period in which Chinese and Western civilization have been in the process of converging into a world civilization. Such a view is heavily influenced by modernization theory but, in China's case, it is also strongly influenced by indigenous sources such as the notion of Shijie Datong or "Great Unity". It has tended to be less popular among more recent historians: postmodern Western historians discount overarching narratives, and nationalist Chinese historians object that such narratives fail to account for what they see as the special or unique characteristics of Chinese culture.
Closely related are colonial and anti-imperialist narratives. These often merge with, or form part of, Marxist critiques from within China or the former Soviet Union, or postmodern critiques such as Edward Said's Orientalism, which fault traditional scholarship for trying to fit the histories of West, South, and East Asia into European categories unsuited to them. With regard to China in particular, T.F. Tsiang and John Fairbank used newly opened archives in the 1930s to write modern history from a Chinese point of view. Fairbank and Teng Ssu-yu then edited the influential volume China's Response to the West (1953). This approach was attacked for ascribing change in China to outside forces. In the 1980s, Paul Cohen, a student of Fairbank's, issued a call for a more "China-Centered history of China".
The schools of thought on the 1911 Revolution have evolved from the early years of the Republic. The Marxist view saw the events of 1911 as a bourgeois revolution. In the 1920s, the Nationalist Party issued a theory of three political stages based on Sun Yat-sen's writings: military unification, political tutelage, and constitutional democracy.
The most obvious criticism is the near-identical nature of "political tutelage" and of a "constitutional democracy" that consisted only of one-party rule until the 1990s. Against this, Chen Shui-bian proposed his own four-stage theory.
Postmodern interpretations of Chinese history tend to reject narrative history and instead focus on a small subset of Chinese history, particularly the daily lives of ordinary people in particular locations or settings.
Zooming out from the dynastic cycle but maintaining a focus on power dynamics, a general periodization based on the most powerful groups and the ways in which power is used has also been proposed for Chinese history.
From the beginning of Communist rule in 1949 until the 1980s, Chinese historical scholarship focused largely on the officially sanctioned Marxist theory of class struggle. From the time of Deng Xiaoping (1978–1992) on, there has been a drift towards a Marxist-inspired Chinese nationalist perspective, and consideration of China's contemporary international status has become of paramount importance in historical studies. The current focus tends to be on specifics of civilization in ancient China, and the general paradigm of how China has responded to the dual challenges of interactions with the outside world and modernization in the post-1700 era. Long abandoned as a research focus among most Western scholars due to postmodernism's influence, this remains the primary interest for most historians inside China.
The late 20th century and early 21st century have seen numerous studies of Chinese history that challenge traditional paradigms. The field is rapidly evolving, with much new scholarship, often based on the realization that there is much about Chinese history that is unknown or controversial. For example, an active topic concerns whether the typical Chinese peasant in 1900 was seeing his life improve. In addition to the realization that there are major gaps in our knowledge of Chinese history is the equal realization that there are tremendous quantities of primary source material that have not yet been analyzed. Scholars are using previously overlooked documentary evidence, such as masses of government and family archives, and economic records such as census tax rolls, price records, and land surveys. In addition, artifacts such as vernacular novels, how-to manuals, and children's books are analyzed for clues about day-to-day life.
Recent Western scholarship of China has been heavily influenced by postmodernism, and has questioned modernist narratives of China's backwardness and lack of development. The desire to challenge the preconception that 19th-century China was weak, for instance, has led to a scholarly interest in Qing expansion into Central Asia. Postmodern scholarship largely rejects grand narratives altogether, preferring to publish empirical studies on the socioeconomics, and political or cultural dynamics, of smaller communities within China.
As of at least 2023, there has been a surge of historical writing about key leaders of the Nationalist period. A significant amount of new writing includes texts written for a general (as opposed to only academic) audience. There has been an increasingly nuanced portrayal of Chiang Kai-shek, particularly in more favorably evaluating his leadership during the War of Resistance against Japan and highlighting his position as one of the Big Four allied leaders. Recently released archival sources on the Nationalist era, including the Chiang Kai-shek diaries at Stanford University's Hoover Institution, have contributed to a surge in academic publishing on the period.
In China, historical scholarship remains largely nationalist and modernist, or even traditionalist. The legacies of the modernist school (such as Lo Hsiang-lin) and the traditionalist school (such as Qian Mu (Chien Mu)) remain strong in Chinese circles. The more modernist works focus on imperial systems in China and employ the scientific method to analyze epochs of Chinese dynasties through geographical, genealogical, and cultural artifacts, for example by using carbon-14 dating and geographical records to correlate climate with cycles of calm and calamity in Chinese history. The traditionalist school draws on official imperial records and colloquial historical works, and analyzes the rise and fall of dynasties using Confucian philosophy, albeit modified by an institutional administration perspective.
After 1911, writers, historians and scholars in China and abroad generally deprecated the late imperial system and its failures. However, in the 21st century, a highly favorable revisionism has emerged in popular culture, in both the media and social media. Building pride in Chinese history, nationalists have portrayed Imperial China as benevolent, strong and more advanced than the West. They blame ugly wars and diplomatic controversies on imperialist exploitation by Western nations and Japan. Although officially still communist and Maoist, in practice China's rulers have used this grassroots sentiment to proclaim that their current policies are restoring China's historical glory. General Secretary Xi Jinping has "sought nothing less than parity between Beijing and Washington--and promised to restore China to its historical glory." Florian Schneider argues that nationalism in China in the early twenty-first century is largely a product of the digital revolution and that a large fraction of the population participates as readers and commentators who relate ideas to their friends over the internet.
7,175 | Chinese Communist Party | The Chinese Communist Party (CCP), officially the Communist Party of China (CPC), is the founding and sole ruling party of the People's Republic of China (PRC). Under the leadership of Mao Zedong, the CCP emerged victorious in the Chinese Civil War against the Kuomintang. In 1949, Mao proclaimed the establishment of the People's Republic of China. Since then, the CCP has governed China and has had sole control over the People's Liberation Army (PLA). Successive leaders of the CCP have added their own theories to the party's constitution, which outlines the party's ideology, collectively referred to as socialism with Chinese characteristics. As of 2023, the CCP has more than 98 million members, making it the second largest political party by membership in the world after India's Bharatiya Janata Party.
In 1921, Chen Duxiu and Li Dazhao led the founding of the CCP with the help of the Far Eastern Bureau of the Communist Party of the Soviet Union and Far Eastern Bureau of the Communist International. For the first six years of its history, the CCP aligned itself with the Kuomintang (KMT) as the organized left wing of the larger nationalist movement. However, when the right wing of the KMT, led by Chiang Kai-shek, turned on the CCP and massacred tens of thousands of the party's members, the two parties split and began a prolonged civil war. During the next ten years of guerrilla warfare, Mao Zedong rose to become the most influential figure in the CCP, and the party established a strong base among the rural peasantry with its land reform policies. Support for the CCP continued to grow throughout the Second Sino-Japanese War, and after the Japanese surrender in 1945, the CCP emerged triumphant in the communist revolution against the Nationalist government. After the KMT's retreat to Taiwan, the CCP established the People's Republic of China on 1 October 1949.
Mao Zedong continued to be the most influential member of the CCP until his death in 1976, although he periodically withdrew from public leadership as his health deteriorated. Under Mao, the party completed its land reform program, launched a series of five-year plans, and eventually split with the Soviet Union. Although Mao attempted to purge the party of capitalist and reactionary elements during the Cultural Revolution, after his death, these policies were only briefly continued by the Gang of Four before a less radical faction seized control. During the 1980s, Deng Xiaoping directed the CCP away from Maoist orthodoxy and towards a policy of economic liberalization. The official explanation for these reforms was that China was still in the primary stage of socialism, a developmental stage similar to the capitalist mode of production. Since the collapse of the Eastern Bloc and the dissolution of the Soviet Union in 1991, the CCP has focused on maintaining its relations with the ruling parties of the remaining socialist states and continues to participate in the International Meeting of Communist and Workers' Parties each year. The CCP has also established relations with several non-communist parties, including dominant nationalist parties of many developing countries in Africa, Asia and Latin America, as well as social democratic parties in Europe.
The Chinese Communist Party is organized based on democratic centralism, a principle that entails open policy discussion on the condition of unity among party members in upholding the agreed-upon decision. The highest body of the CCP is the National Congress, convened every fifth year. When the National Congress is not in session, the Central Committee is the highest body, but since that body usually only meets once a year, most duties and responsibilities are vested in the Politburo and its Standing Committee. Members of the latter are seen as the top leadership of the party and the state. Today the party's leader holds the offices of general secretary (responsible for civilian party duties), Chairman of the Central Military Commission (CMC) (responsible for military affairs), and State President (a largely ceremonial position). Because of these posts, the party leader is seen as the country's paramount leader. The current leader is Xi Jinping, who was elected at the 1st Plenary Session of the 18th Central Committee held on 15 November 2012 and has been reelected twice, on 25 October 2017 by the 19th Central Committee and on 23 October 2022 by the 20th Central Committee.
The CCP traces its origins to the May Fourth Movement of 1919, during which radical Western ideologies like Marxism and anarchism gained traction among Chinese intellectuals. Other influences stemming from the October Revolution and Marxist theory inspired the CCP. Chen Duxiu and Li Dazhao were among the first to publicly support Leninism and world revolution. Both regarded the October Revolution in Russia as groundbreaking, believing it to herald a new era for oppressed countries everywhere. Study circles were, according to Cai Hesen, "the rudiments [of our party]". Several study circles were established during the New Culture Movement, but by 1920 many grew skeptical about their ability to bring about reforms.
The CCP was founded on 1 July 1921 with the help of the Far Eastern Bureau of the Communist Party of the Soviet Union and the Far Eastern Secretariat of the Communist International, according to the party's official account of its history. However, party documents suggest that the party's actual founding date was 23 July 1921, the first day of the 1st National Congress of the CCP. The founding National Congress of the CCP was held 23–31 July 1921. With only 50 members at the beginning of 1921, among them Chen Duxiu, Li Dazhao and Mao Zedong, the party's organization and authority grew tremendously. While the congress was originally held in a house in the Shanghai French Concession, French police interrupted the meeting on 30 July and the congress was moved to a tourist boat on South Lake in Jiaxing, Zhejiang province. A dozen delegates attended the congress; neither Li nor Chen was able to attend, the latter sending a personal representative in his stead. The resolutions of the congress called for the establishment of a communist party as a branch of the Communist International (Comintern) and elected Chen as its leader. Chen then served as the first general secretary of the CCP and was referred to as "China's Lenin".
The Soviets hoped to foster pro-Soviet forces in East Asia to fight against anti-communist countries, particularly Japan. They attempted to contact the warlord Wu Peifu but failed. The Soviets then contacted the Kuomintang (KMT), which was leading the Guangzhou government parallel to the Beiyang government. On 6 October 1923, the Comintern sent Mikhail Borodin to Guangzhou, and the Soviets established friendly relations with the KMT. The Central Committee of the CCP, Soviet leader Joseph Stalin, and the Comintern all hoped that the CCP would eventually control the KMT and called their opponents "rightists". KMT leader Sun Yat-sen eased the conflict between the communists and their opponents. CCP membership grew tremendously after the 4th congress in 1925, from 900 to 2,428. The CCP still treats Sun Yat-sen as one of the founders of its movement and claims descent from him, as he is viewed as a proto-communist and the economic element of his ideology was socialism. Sun stated, "Our Principle of Livelihood is a form of communism".
The communists dominated the left wing of the KMT and struggled for power with the party's right-wing factions. When Sun Yat-sen died in March 1925, he was succeeded by a rightist, Chiang Kai-shek, who initiated moves to marginalize the position of the communists. Chiang, Sun's former assistant, was not actively anti-communist at that time, even though he hated the theory of class struggle and the CCP's seizure of power. The communists proposed removing Chiang's power. When Chiang gradually gained the support of Western countries, the conflict between him and the communists became more and more intense. Chiang asked the Kuomintang to join the Comintern to rule out the secret expansion of communists within the KMT, while Chen Duxiu hoped that the communists would completely withdraw from the KMT.
In April 1927, both Chiang and the CCP were preparing for conflict. Fresh from the success of the Northern Expedition to overthrow the warlords, Chiang Kai-shek turned on the communists, who by now numbered in the tens of thousands across China. Ignoring the orders of the Wuhan-based KMT government, he marched on Shanghai, a city controlled by communist militias. Although the communists welcomed Chiang's arrival, he turned on them, massacring 5,000 with the aid of the Green Gang. Chiang's army then marched on Wuhan but was prevented from taking the city by CCP General Ye Ting and his troops. Chiang's allies also attacked communists; for example, in Beijing, Li Dazhao and 19 other leading communists were executed by Zhang Zuolin. Angered by these events, the peasant movement supported by the CCP became more violent. Ye Dehui, a famous scholar, was killed by communists in Changsha, and in revenge, KMT general He Jian and his troops gunned down hundreds of peasant militiamen. That May, tens of thousands of communists and their sympathizers were killed by KMT troops, with the CCP losing approximately 15,000 of its 25,000 members.
The CCP continued supporting the Wuhan KMT government, but on 15 July 1927 the Wuhan government expelled all communists from the KMT. The CCP reacted by founding the Workers' and Peasants' Red Army of China, better known as the "Red Army", to battle the KMT. A battalion led by General Zhu De was ordered to take the city of Nanchang on 1 August 1927 in what became known as the Nanchang uprising. Initially successful, Zhu and his troops were forced to retreat after five days, marching south to Shantou, and from there being driven into the wilderness of Fujian. Mao Zedong was appointed commander-in-chief of the Red Army, and led four regiments against Changsha in the Autumn Harvest Uprising, hoping to spark peasant uprisings across Hunan. His plan was to attack the KMT-held city from three directions on 9 September, but the Fourth Regiment deserted to the KMT cause, attacking the Third Regiment. Mao's army made it to Changsha but could not take it; by 15 September, he accepted defeat, with 1,000 survivors marching east to the Jinggang Mountains of Jiangxi.
The near destruction of the CCP's urban organizational apparatus led to institutional changes within the party. The party adopted democratic centralism, a way to organize revolutionary parties, and established a politburo to function as the standing committee of the central committee. The result was increased centralization of power within the party. This structure was duplicated at every level of the party, with standing committees now in effective control. After being expelled from the party, Chen Duxiu went on to lead China's Trotskyist movement. Li Lisan was able to assume de facto control of the party organization by 1929–1930, but his leadership was a failure that left the CCP on the brink of destruction. The Comintern became involved, and by late 1930 his powers had been taken away. By 1935 Mao had become a member of the Politburo Standing Committee of the CCP and the party's informal military leader, with Zhou Enlai and Zhang Wentian, the formal head of the party, serving as his informal deputies. The conflict with the KMT led to the reorganization of the Red Army, with power now centralized in the leadership through the creation of CCP political departments charged with supervising the army.
The Xi'an Incident of December 1936 paused the conflict between the CCP and the KMT. Under pressure from Marshal Zhang Xueliang and the CCP, Chiang Kai-shek finally agreed to a Second United Front focused on repelling the Japanese invaders. While the front formally existed until 1945, all collaboration between the two parties had effectively ended by 1940. Despite their formal alliance, the CCP used the opportunity to expand and carve out independent bases of operations in preparation for the coming war with the KMT. In 1939 the KMT began to restrict CCP expansion within China, leading to frequent clashes between CCP and KMT forces; these clashes subsided rapidly once both sides realized that civil war amid a foreign invasion was not an option. By 1943, the CCP was again actively expanding its territory at the expense of the KMT.
Mao Zedong became the Chairman of the CCP in 1945. After the Japanese surrender that year, the war between the CCP and the KMT began again in earnest. The 1945–49 period had four stages. The first lasted from August 1945 (when the Japanese surrendered) to June 1946 (when the peace talks between the CCP and the KMT ended). By 1945, the KMT had three times more soldiers under its command than the CCP and initially appeared to be prevailing. With the cooperation of the U.S. and Japan, the KMT was able to retake major parts of the country. However, KMT rule over the reconquered territories proved unpopular because of its endemic political corruption. Notwithstanding its numerical superiority, the KMT failed to reconquer the rural territories which made up the CCP's stronghold. Around the same time, the CCP launched an invasion of Manchuria, where it was assisted by the Soviet Union. The second stage, lasting from July 1946 to June 1947, saw the KMT extend its control over major cities such as Yan'an, which had been the CCP headquarters for much of the war. The KMT's successes were hollow; the CCP had tactically withdrawn from the cities and instead undermined KMT rule there by instigating protests amongst students and intellectuals. The KMT responded to these demonstrations with heavy-handed repression. In the meantime, the KMT was struggling with factional infighting and Chiang Kai-shek's autocratic control over the party, which weakened its ability to respond to attacks. The third stage, lasting from July 1947 to August 1948, saw a limited counteroffensive by the CCP, with the objective of "clearing Central China, strengthening North China, and recovering Northeast China." This operation, coupled with military desertions from the KMT, resulted in the KMT losing 2 million of its 3 million troops by the spring of 1948, and saw a significant decline in support for KMT rule. The CCP was consequently able to cut off KMT garrisons in Manchuria and retake several territories. The last stage, lasting from September 1948 to December 1949, saw the communists go on the offensive and the collapse of KMT rule in mainland China as a whole. Mao's proclamation of the founding of the People's Republic of China on 1 October 1949 marked the end of the second phase of the Chinese Civil War (or the Chinese Communist Revolution, as it is called by the CCP).
Mao proclaimed the founding of the People's Republic of China (PRC) before a massive crowd at Tiananmen Square on 1 October 1949. The CCP headed the Central People's Government. From this time through the 1980s, the top leaders of the CCP (such as Mao Zedong, Lin Biao, Zhou Enlai and Deng Xiaoping) were largely the same men who had been the party's military leaders before the PRC's founding. As a result, informal personal ties between political and military leaders dominated civil-military relations.
Stalin proposed a one-party constitution when Liu Shaoqi visited the Soviet Union in 1952. The 1954 constitution of the PRC subsequently abolished the previous coalition government and established the CCP's one-party system. In 1957, the CCP launched the Anti-Rightist Campaign against political dissidents and prominent figures from the minor parties, which resulted in the political persecution of at least 550,000 people. The campaign significantly damaged the limited political pluralism within the socialist republic and solidified the country's status as a de facto one-party state.
The Anti-Rightist Campaign led to the catastrophic results of the Second Five-Year Plan (1958–1962), known as the Great Leap Forward. In an effort to transform the country from an agrarian economy into an industrialized one, the CCP collectivized farmland, formed people's communes, and diverted labor to factories. General mismanagement and exaggerated harvest reports by CCP officials led to the Great Chinese Famine, which resulted in an estimated 15 to 45 million deaths, making it the largest famine in recorded history.
During the 1960s and 1970s, the CCP experienced a significant ideological separation from the Communist Party of the Soviet Union, which was going through a period of "de-Stalinization" under Nikita Khrushchev. By that time, Mao had begun to argue that the "continued revolution under the dictatorship of the proletariat" stipulated that class enemies continued to exist even though the socialist revolution seemed to be complete; this thinking led to the Cultural Revolution, in which millions were persecuted and killed. During the Cultural Revolution, party leaders such as Liu Shaoqi, Deng Xiaoping, Peng Dehuai, and He Long were purged or exiled, and the Gang of Four, led by Mao's wife Jiang Qing, emerged to fill the resulting power vacuum.
Following Mao's death in 1976, a power struggle erupted between CCP chairman Hua Guofeng and vice-chairman Deng Xiaoping. Deng won the struggle and became China's paramount leader in 1978. Deng, alongside Hu Yaobang and Zhao Ziyang, spearheaded the "reform and opening-up" policies and introduced the ideological concept of socialism with Chinese characteristics, opening China to the world's markets. In reversing some of Mao's "leftist" policies, Deng argued that a socialist state could use the market economy without itself being capitalist. While asserting the political power of the CCP, the change in policy generated significant economic growth. It was justified on the basis that "Practice is the Sole Criterion for the Truth", a principle reinforced through a 1978 article that aimed to combat dogmatism and criticized the "Two Whatevers" policy. The new ideology was contested on both sides of the spectrum, however, by Maoists to the left of the CCP's leadership as well as by those supporting political liberalization. Together with other social factors, these conflicts culminated in the 1989 Tiananmen Square protests and massacre. With the protests crushed and the reformist party general secretary Zhao Ziyang under house arrest, Deng's economic policies resumed, and by the early 1990s the concept of a socialist market economy had been introduced. In 1997, Deng's beliefs (officially called "Deng Xiaoping Theory") were embedded in the CCP's constitution.
CCP general secretary Jiang Zemin succeeded Deng as paramount leader in the 1990s and continued most of his policies. During that decade, the CCP was transformed from a veteran revolutionary leadership that led both militarily and politically into a political elite renewed increasingly according to institutionalized norms in the civil bureaucracy. Leaders were largely selected on the basis of rules and norms governing promotion and retirement, educational background, and managerial and technical expertise. A largely separate group of professionalized military officers serves under the top CCP leadership, mainly through formal relationships within institutional channels.
As part of Jiang Zemin's legacy, the CCP ratified the "Three Represents" for the 2003 revision of the party's constitution as a "guiding ideology", encouraging the party to represent "advanced productive forces, the progressive course of China's culture, and the fundamental interests of the people." The theory legitimized the entry of private business owners and bourgeois elements into the party. Hu Jintao, Jiang Zemin's successor as general secretary, took office in 2002. Unlike Mao, Deng and Jiang Zemin, Hu emphasized collective leadership and opposed one-man dominance of the political system. The insistence on focusing on economic growth had led to a wide range of serious social problems; to address these, Hu introduced two main ideological concepts: the "Scientific Outlook on Development" and "Harmonious Society". Hu resigned from his posts as CCP general secretary and Chairman of the CMC at the 18th National Congress held in 2012 and was succeeded in both by Xi Jinping.
Since taking power, Xi has initiated a wide-reaching anti-corruption campaign while centralizing power in the office of CCP general secretary at the expense of the collective leadership of prior decades. Commentators have described the campaign as a defining part of Xi's leadership as well as "the principal reason why he has been able to consolidate his power so quickly and effectively." Xi's leadership has also overseen an increase in the Party's role in China. In 2017, Xi's ideology, named after himself, was added to the CCP constitution. His term as general secretary was renewed in 2022.
Since 2014, the CCP has led efforts in Xinjiang that involve the detention of more than 1 million Uyghurs and other ethnic minorities in internment camps, as well as other repressive measures. These actions have been described as a genocide by academics and some governments. On the other hand, a greater number of countries signed a joint letter to the UN Human Rights Council supporting the policies as an effort to combat terrorism in the region.
Celebrations of the 100th anniversary of the CCP's founding, one of the Two Centenaries, took place on 1 July 2021. At the sixth plenary session of the 19th Central Committee in November 2021, the CCP adopted a resolution on the party's history. This was the third such resolution, after ones adopted under Mao Zedong and Deng Xiaoping, and the document for the first time credited Xi as being the "main innovator" of Xi Jinping Thought while also declaring his leadership "the key to the great rejuvenation of the Chinese nation". In comparison with the earlier historical resolutions, Xi's did not herald a major change in how the CCP evaluated its history.
On 6 July 2021, Xi chaired the Communist Party of China and World Political Parties Summit, which involved representatives from 500 political parties across 160 countries. Xi urged the participants to oppose "technology blockades" and "developmental decoupling" in order to work towards "building a community with a shared future for mankind."
The core ideology of the party has evolved with each generation of Chinese leadership. Because both the CCP and the People's Liberation Army promote their members according to seniority, it is possible to discern distinct generations of Chinese leadership. In official discourse, each generation is identified with a distinct extension of the party's ideology. Historians have studied various periods in the development of the government of the People's Republic of China by reference to these "generations".
Marxism–Leninism was the first official ideology of the CCP. According to the CCP, "Marxism–Leninism reveals the universal laws governing the development of the history of human society." To the CCP, Marxism–Leninism provides a "vision of the contradictions in capitalist society and of the inevitability of future socialist and communist societies". According to the People's Daily, Mao Zedong Thought "is Marxism–Leninism applied and developed in China". Mao Zedong Thought was conceived not only by Mao Zedong himself, but also by other leading party officials.
Deng Xiaoping Theory was added to the party constitution at the 14th National Congress in 1992. The concepts of "socialism with Chinese characteristics" and "the primary stage of socialism" were credited to the theory. Deng Xiaoping Theory can be defined as the belief that state socialism and state planning are not by definition communist, and that market mechanisms are class-neutral. In addition, the party needs to react dynamically to changing circumstances; to know whether a certain policy is obsolete, the party has to "seek truth from facts" and follow the slogan "practice is the sole criterion for the truth". At the 14th National Congress, Jiang reiterated Deng's mantra that it was unnecessary to ask whether something was socialist or capitalist, since the important factor was whether it worked.
The "Three Represents", Jiang Zemin's contribution to the party's ideology, was adopted by the party at the 16th National Congress. The Three Represents defines the role of the CCP, and stresses that the Party must always represent the requirements for developing China's advanced productive forces, the orientation of China's advanced culture and the fundamental interests of the overwhelming majority of the Chinese people." Certain segments within the CCP criticized the Three Represents as being un-Marxist and a betrayal of basic Marxist values. Supporters viewed it as a further development of socialism with Chinese characteristics. Jiang disagreed, and had concluded that attaining the communist mode of production, as formulated by earlier communists, was more complex than had been realized, and that it was useless to try to force a change in the mode of production, as it had to develop naturally, by following the "economic laws of history." The theory is most notable for allowing capitalists, officially referred to as the "new social strata", to join the party on the grounds that they engaged in "honest labor and work" and through their labour contributed "to build[ing] socialism with Chinese characteristics."
In 2003 the 3rd Plenary Session of the 16th Central Committee conceived and formulated the ideology of the Scientific Outlook on Development (SOD). It is considered to be Hu Jintao's contribution to the official ideological discourse. The SOD incorporates scientific socialism, sustainable development, social welfare, a humanistic society, increased democracy, and, ultimately, the creation of a Socialist Harmonious Society. According to official statements by the CCP, the concept integrates "Marxism with the reality of contemporary China and with the underlying features of our times, and it fully embodies the Marxist worldview on and methodology for development."
Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, commonly known as Xi Jinping Thought, was added to the party constitution at the 19th National Congress in 2017.
The party combines elements of both socialist patriotism and Chinese nationalism.
Deng did not believe that the fundamental difference between the capitalist mode of production and the socialist mode of production was central planning versus free markets. He said, "A planned economy is not the definition of socialism, because there is planning under capitalism; the market economy happens under socialism, too. Planning and market forces are both ways of controlling economic activity". Jiang Zemin supported Deng's thinking, and stated at a party gathering that it did not matter whether a certain mechanism was capitalist or socialist, because the only thing that mattered was whether it worked. It was at this gathering that Jiang Zemin introduced the term socialist market economy, which replaced Chen Yun's "planned socialist market economy". In his report to the 14th National Congress, Jiang Zemin told the delegates that the socialist state would "let market forces play a basic role in resource allocation." At the 15th National Congress, the party line was changed to "make market forces further play their role in resource allocation"; this line continued until the 3rd Plenary Session of the 18th Central Committee, when it was amended to "let market forces play a decisive role in resource allocation." Despite this, the 3rd Plenary Session of the 18th Central Committee upheld the creed "Maintain the dominance of the public sector and strengthen the economic vitality of the state-owned economy."
"... their theory that capitalism is the ultimate [force] has been shaken, and socialist development has experienced a miracle. Western capitalism has suffered reversals, a financial crisis, a credit crisis, a crisis of confidence, and their self-conviction has wavered. Western countries have begun to reflect, and openly or secretively compare themselves against China's politics, economy and path."
— Xi Jinping, on the inevitability of socialism
The CCP views the world as organized into two opposing camps: socialist and capitalist. It insists that socialism, on the basis of historical materialism, will eventually triumph over capitalism. In recent years, when the party has been asked to explain the capitalist globalization now occurring, it has returned to the writings of Karl Marx. Despite admitting that globalization developed through the capitalist system, the party's leaders and theorists argue that globalization is not intrinsically capitalist: if globalization were purely capitalist, it would exclude an alternative socialist form of modernity. Globalization, like the market economy, therefore does not have one specific class character (neither socialist nor capitalist), according to the party. The insistence that globalization is not fixed in nature comes from Deng's insistence that China can pursue socialist modernization by incorporating elements of capitalism. For this reason there is considerable optimism within the CCP that, despite the current capitalist dominance of globalization, it can be turned into a vehicle supporting socialism.
While foreign analysts generally agree that the CCP has rejected orthodox Marxism–Leninism and Mao Zedong Thought, or at least their core orthodox tenets, the CCP itself disagrees. Critics of the CCP argue that Jiang Zemin ended the party's formal commitment to Marxism–Leninism with the introduction of the Three Represents. However, party theorist Leng Rong disagrees, claiming that "President Jiang rid the Party of the ideological obstacles to different kinds of ownership ... He did not give up Marxism or socialism. He strengthened the Party by providing a modern understanding of Marxism and socialism—which is why we talk about a 'socialist market economy' with Chinese characteristics." The attainment of true "communism" is still described as the CCP's and China's "ultimate goal". While the CCP claims that China is in the primary stage of socialism, party theorists argue that the current development stage "looks a lot like capitalism". Alternatively, certain party theorists argue that "capitalism is the early or first stage of communism." Some have dismissed the concept of a primary stage of socialism as intellectual cynicism. For example, Robert Lawrence Kuhn, a former foreign adviser to the Chinese government, stated: "When I first heard this rationale, I thought it more comic than clever—a wry caricature of hack propagandists leaked by intellectual cynics. But the 100-year horizon comes from serious political theorists."
American political scientist and sinologist David Shambaugh argues that before the "Practice Is the Sole Criterion for the Truth" campaign, the relationship between ideology and decision making was a deductive one, meaning that policy-making was derived from ideological knowledge. However, under Deng's leadership this relationship was turned upside down, with decision making justifying ideology. Chinese policy-makers have described the Soviet Union's state ideology as "rigid, unimaginative, ossified, and disconnected from reality", believing that this was one of the reasons for the dissolution of the Soviet Union. Therefore, Shambaugh argues, Chinese policy-makers believe that their party ideology must be dynamic to safeguard the party's rule.
British sinologist Kerry Brown argues that the CCP does not have an ideology, and that the party organization is pragmatic and interested only in what works. The party itself argues against this assertion. Hu Jintao stated in 2012 that the Western world is "threatening to divide us" and that "the international culture of the West is strong while we are weak ... Ideological and cultural fields are our main targets". As such, the CCP puts a great deal of effort into the party schools and into crafting its ideological message.
Collective leadership, the idea that decisions will be taken through consensus, is the ideal in the CCP. The concept traces its origins back to Lenin and the Russian Bolshevik Party. At the level of the central party leadership this means that, for instance, all members of the Politburo Standing Committee are of equal standing (each member having only one vote). A member of the Politburo Standing Committee often represents a sector; during Mao's reign, Mao controlled the People's Liberation Army, Kang Sheng the security apparatus, and Zhou Enlai the State Council and the Ministry of Foreign Affairs. This counts as informal power. Paradoxically, members of a body are ranked hierarchically even though they are in theory equal to one another. Informally, the collective leadership is headed by a "leadership core"; that is, the paramount leader, the person who holds the offices of CCP general secretary, CMC chairman and PRC president. Before Jiang Zemin's tenure as paramount leader, the party core and collective leadership were indistinguishable, and in practice the core was not responsible to the collective leadership. By the time of Jiang, however, the party had begun propagating a responsibility system, referring to it in official pronouncements as the "core of the collective leadership".
"[Democratic centralism] is centralized on the basis of democracy and democratic under centralized guidance. This is the only system that can give full expression to democracy with full powers vested in the people's congresses at all levels and, at the same time, guarantee centralized administration with the governments at each level ..."
— Mao Zedong, from his speech entitled "Our General Programme"
The CCP's organizational principle is democratic centralism, a principle that entails open discussion of policy on the condition of unity among party members in upholding the agreed-upon decision. It is based on two principles: democracy (synonymous in official discourse with "socialist democracy" and "inner-party democracy") and centralism. This has been the guiding organizational principle of the party since the 5th National Congress, held in 1927. In the words of the party constitution, "The Party is an integral body organized under its program and constitution and on the basis of democratic centralism". Mao once quipped that democratic centralism was "at once democratic and centralized, with the two seeming opposites of democracy and centralization united in a definite form." Mao claimed that the superiority of democratic centralism lay in its internal contradictions between democracy and centralism, and between freedom and discipline. Currently, the CCP claims that "democracy is the lifeline of the Party, the lifeline of socialism", but that for democracy to be implemented and to function properly, there needs to be centralization. According to the party, the goal of democratic centralism is not to obliterate capitalism or its policies but to regulate capitalism while incorporating socialism and democracy. Democracy in any form, the CCP claims, needs centralism, since without centralism there will be no order.
Shuanggui is an intra-party disciplinary process conducted by the Central Commission for Discipline Inspection (CCDI) on members accused of "disciplinary violations", a charge which generally refers to political corruption. The process, whose name literally translates to "double regulation", aims to extract confessions from members accused of violating party rules. According to the Dui Hua Foundation, tactics such as cigarette burns, beatings and simulated drowning are among those used to extract confessions. Other reported techniques include the use of induced hallucinations; one subject of this method reported that "In the end I was so exhausted, I agreed to all the accusations against me even though they were false."
The CCP employs a political strategy it terms "united front work", which involves groups and key individuals that are influenced or controlled by the CCP and are used to advance its interests. United front work is managed primarily, but not exclusively, by the United Front Work Department (UFWD). The united front has historically been a popular front that includes eight legally permitted political parties, alongside other people's organizations, which have nominal representation in the National People's Congress and the Chinese People's Political Consultative Conference (CPPCC). However, the CPPCC is a body without real power. While consultation does take place, it is supervised and directed by the CCP. Under Xi Jinping, the united front and its targets of influence have expanded in size and scope.
The National Congress is the party's highest body and, since the 9th National Congress in 1969, has been convened every five years (prior to the 9th Congress, congresses were convened on an irregular basis). According to the party's constitution, a congress may not be postponed except "under extraordinary circumstances." The party constitution gives the National Congress six responsibilities: electing the Central Committee; electing the Central Commission for Discipline Inspection (CCDI); examining the report of the outgoing Central Committee; examining the report of the outgoing CCDI; discussing and enacting party policies; and revising the party's constitution.
In practice, the delegates rarely discuss issues at length at the National Congresses. Most substantive discussion takes place before the congress, during the preparation period, among a group of top party leaders. Between National Congresses, the Central Committee is the highest decision-making institution. The CCDI is responsible for supervising the party's internal anti-corruption and ethics system; between congresses it operates under the authority of the Central Committee.
The Central Committee, as the party's highest decision-making institution between national congresses, elects several bodies to carry out its work. The first plenary session of a newly elected central committee elects the general secretary of the Central Committee, the party's leader; the Central Military Commission (CMC); the Politburo; and the Politburo Standing Committee (PSC). The first plenum also endorses the composition of the Secretariat and the leadership of the CCDI. According to the party constitution, the general secretary must be a member of the PSC and is responsible for convening meetings of the PSC and the Politburo, while also presiding over the work of the Secretariat. The Politburo "exercises the functions and powers of the Central Committee when a plenum is not in session". The PSC is the party's highest decision-making institution when the Politburo, the Central Committee and the National Congress are not in session, and it convenes at least once a week. It was established at the 8th National Congress, in 1958, to take over the policy-making role formerly assumed by the Secretariat. The Secretariat is the top implementation body of the Central Committee and can make decisions within the policy framework established by the Politburo; it is also responsible for supervising the work of organizations that report directly to the Central Committee, such as departments, commissions and publications. The CMC is the highest decision-making institution on military affairs within the party and controls the operations of the People's Liberation Army. Since Jiang Zemin, the general secretary has also served as Chairman of the CMC. Unlike the collective leadership ideal of other party organs, the CMC chairman acts as commander-in-chief with full authority to appoint or dismiss top military officers at will.
A first plenum of the Central Committee also elects heads of departments, bureaus, central leading groups and other institutions to pursue its work during a term (a "term" being the period between national congresses, usually five years). The General Office is the party's "nerve centre", in charge of day-to-day administrative work, including communications, protocol, and setting agendas for meetings. The CCP currently has six main central departments: the Organization Department, responsible for overseeing provincial appointments and vetting cadres for future appointments; the Publicity Department (formerly the "Propaganda Department"), which oversees the media and formulates the party line for the media; the United Front Work Department, which oversees the country's eight minor parties, people's organizations, and influence groups inside and outside the country; the International Liaison Department, which functions as the party's "foreign affairs ministry" in relations with other parties; the Social Work Department, which handles work related to civic groups, chambers of commerce, industry groups, and mixed-ownership and non-public enterprises; and the Central Political and Legal Affairs Commission, which oversees the country's law enforcement authorities. The Central Committee also has direct control over the Central Policy Research Office, which is responsible for researching issues of significant interest to the party leadership; the Central Party School, which provides political training and ideological indoctrination in communist thought for high-ranking and rising cadres; and the Institution for Party History and Literature Research, which sets priorities for scholarly research in state-run universities and the Central Party School and studies and translates the classical works of Marxism. The party's newspaper, the People's Daily, is under the direct control of the Central Committee and is published with the objectives "to tell good stories about China and the (Party)" and to promote its party leader. The theoretical magazines Seeking Truth from Facts and Study Times are published by the Central Party School. The China Media Group, which oversees China Central Television (CCTV), China National Radio (CNR) and China Radio International (CRI), is under the direct control of the Publicity Department. The various offices of the "Central Leading Groups", such as the Hong Kong and Macau Work Office, the Taiwan Affairs Office, and the Central Finance Office, also report to the Central Committee during a plenary session. Additionally, the CCP exercises sole control over the People's Liberation Army (PLA) through its Central Military Commission.
After seizing political power, the CCP extended the dual party-state command system to all government institutions, social organizations, and economic entities. The State Council and the Supreme Court have each had a party group since November 1949. Party committees permeate every state administrative organ as well as the people's political consultative conferences and mass organizations at all levels. According to scholar Rush Doshi, "[t]he Party sits above the state, runs parallel to the state, and is enmeshed in every level of the state." Modelled after the Soviet nomenklatura system, the party committee's organization department at each level has the power to recruit, train, monitor, appoint, and relocate officials.
Party committees exist at the level of provinces, cities, counties, and neighborhoods. These committees play a key role in directing local policy by selecting local leaders and assigning critical tasks. The party secretary at each level is more senior than the leader of the corresponding government, with the CCP standing committee being the main locus of power. Party committee members at each level are selected by the leadership at the level above, with provincial leaders selected by the central Organization Department, and they are not removable by the local party secretary. Neighborhood committees are generally composed of older volunteers.
CCP committees exist inside companies, both private and state-owned. A business with at least three party members is legally required to establish a party committee or branch. As of 2021, more than half of China's private firms had such organizations. These branches provide venues for socializing new members and host morale-boosting events for existing members. They also provide mechanisms that help private firms interface with government bodies and learn about policies relevant to their fields. On average, the profitability of private firms with a CCP branch is 12.6 percent higher than that of private firms without one.
Within state-owned enterprises, these branches are governing bodies that make important decisions and inculcate CCP ideology in employees. Party committees or branches within companies also provide various benefits to employees. These may include bonuses, interest-free loans, mentorship programs, and free medical and other services for those in need. Enterprises that have party branches generally provide more expansive benefits for employees in the areas of retirement, medical care, unemployment, injury, and birth and fertility. Increasingly, the CCP is requiring private companies to revise their charters to include the role of the party.
The funding of CCP organizations comes mainly from state fiscal revenue. Data on the share of China's total fiscal revenue spent on CCP organizations is unavailable.
"It is my will to join the Communist Party of China, uphold the Party's program, observe the provisions of the Party constitution, fulfill a Party member's duties, carry out the Party's decisions, strictly observe Party discipline, guard Party secrets, be loyal to the Party, work hard, fight for communism throughout my life, be ready at all times to sacrifice my all for the Party and the people, and never betray the Party."
— Chinese Communist Party Admission Oath
The CCP reached 98.04 million members at the end of 2022, a net increase of 1.3 million over the previous year. It is the second largest political party in the world after India's Bharatiya Janata Party.
To join the CCP, an applicant must go through an approval process. Adults can file applications for membership with their local party branch. A prescreening process, akin to a background check, follows. Next, established party members at the local branch vet the applicant's behavior and political attitudes and may make a formal inquiry to a party branch near the residence of the applicant's parents to vet the family's loyalty to communism and the party. In 2014, only 2 million applications were accepted out of some 22 million applicants. Admitted applicants then spend a year as probationary members, who are typically accepted into full membership.
In contrast to the past, when emphasis was placed on applicants' ideological credentials, the current CCP stresses technical and educational qualifications. To become a probationary member, the applicant must take an admission oath before the party flag. The relevant CCP organization is responsible for observing and educating probationary members. Probationary members have duties similar to those of full members, except that they may neither vote in party elections nor stand for election. Many join the CCP through the Communist Youth League. Under Jiang Zemin, private entrepreneurs were allowed to become party members.
As of December 2022, individuals who identify as farmers, herdsmen and fishermen made up 26 million members, while members identifying as workers totalled 6.7 million. Another group, "managing, professional and technical staff in enterprises and public institutions", made up 15.9 million; 11.3 million identified as administrative staff and 7.8 million described themselves as party cadres. By 2022, CCP membership had become more educated, younger, and less blue-collar than previously, with 54.7% of party members holding a college degree or above. As of 2022, around 30 to 35 percent of Chinese entrepreneurs are or have been party members. At the end of 2022, the CCP stated that it had approximately 7.46 million ethnic minority members, or 7.6% of its total membership.
As of 2023, 29.30 million women are CCP members, representing 29.9% of the party. Women in China have low participation rates as political leaders, and their disadvantage is most evident in their severe underrepresentation in the more powerful political positions. At the top level of decision making, no woman has ever sat on the Politburo Standing Committee, and the broader Politburo currently has no female members. Just 3 of 27 government ministers are women, and, according to the Inter-Parliamentary Union, China has fallen from 16th to 53rd place in the world since 1997 in terms of female representation in the National People's Congress. CCP leaders such as Zhao Ziyang have vigorously opposed the participation of women in the political process. Within the party, women face a glass ceiling.
A 2019 Binghamton University study found that CCP members earn a 20% wage premium in the labor market over non-members. A subsequent academic study found that the economic benefit of CCP membership is strongest for those in lower wealth brackets.
The Communist Youth League (CYL) is the CCP's youth wing and the largest mass organization for youth in China. To join, an applicant must be between the ages of 14 and 28. The CYL controls and supervises the Young Pioneers, a youth organization for children below the age of 14. The organizational structure of the CYL is an exact copy of the CCP's: the highest body is the National Congress, followed by the Central Committee, the Politburo and the Politburo Standing Committee. However, the Central Committee (and all central organs) of the CYL work under the guidance of the CCP central leadership. Estimates from 2021 put the number of CYL members at over 81 million.
At the beginning of its history, the CCP did not have a single official standard for the flag, but instead allowed individual party committees to copy the flag of the Communist Party of the Soviet Union. The Central Politburo decreed the establishment of a sole official flag on 28 April 1942: "The flag of the CPC has the length-to-width proportion of 3:2 with a hammer and sickle in the upper-left corner, and with no five-pointed star. The Political Bureau authorizes the General Office to custom-make a number of standard flags and distribute them to all major organs".
According to People's Daily, "The red color symbolizes revolution; the hammer-and-sickle are tools of workers and peasants, meaning that the Communist Party of China represents the interests of the masses and the people; the yellow color signifies brightness."
The International Liaison Department of the CCP is responsible for dialogue with global political parties.
The CCP continues to have relations with non-ruling communist and workers' parties and attends international communist conferences, most notably the International Meeting of Communist and Workers' Parties. The CCP retains contact with major parties such as the Communist Party of Portugal, the Communist Party of France, the Communist Party of the Russian Federation, the Communist Party of Bohemia and Moravia, the Communist Party of Brazil, the Communist Party of Greece, the Communist Party of Nepal and the Communist Party of Spain, as well as with minor communist and workers' parties such as the Communist Party of Australia, the Workers Party of Bangladesh, the Communist Party of Bangladesh (Marxist–Leninist) (Barua), the Communist Party of Sri Lanka, the Workers' Party of Belgium, the Hungarian Workers' Party, the Dominican Workers' Party, the Nepal Workers Peasants Party, and the Party for the Transformation of Honduras. In recent years, observing the self-reform of the European social democratic movement in the 1980s and 1990s, the CCP "has noted the increased marginalization of West European communist parties."
The CCP has retained close relations with the ruling parties of socialist states still espousing communism: Cuba, Laos, North Korea, and Vietnam. It spends a fair amount of time analysing the situation in the remaining socialist states, trying to reach conclusions as to why these states survived when so many did not, following the collapse of the Eastern European socialist states in 1989 and the dissolution of the Soviet Union in 1991. In general, the analyses of the remaining socialist states and their chances of survival have been positive, and the CCP believes that the socialist movement will be revitalized sometime in the future.
The ruling party in which the CCP is most interested is the Communist Party of Vietnam (CPV), which is generally considered a model example of socialist development in the post-Soviet era. Chinese analysts of Vietnam believe that the introduction of the Đổi Mới reform policy at the 6th CPV National Congress is the key reason for Vietnam's current success.
While the CCP is probably the organization with the most access to North Korea, writing about North Korea is tightly circumscribed, and the few reports accessible to the general public are those about North Korean economic reforms. While Chinese analysts of North Korea tend to speak positively of the country in public, in official discussions around 2008 they showed much disdain for North Korea's economic system, the cult of personality that pervades its society, the Kim family, the idea of hereditary succession in a socialist state, the security state, the use of scarce resources on the Korean People's Army, and the general impoverishment of the North Korean people. Around that time, some analysts compared North Korea's situation with that of China during the Cultural Revolution. Over the years, the CCP has tried to persuade the Workers' Party of Korea (WPK), North Korea's ruling party, to introduce economic reforms by showing it key economic infrastructure in China. For instance, in 2006 the CCP invited then-WPK general secretary Kim Jong Il to Guangdong to showcase the success that economic reforms had brought China. In general, the CCP considers the WPK and North Korea to be negative examples of a ruling communist party and socialist state.
There is a considerable degree of interest in Cuba within the CCP. Fidel Castro, the former First Secretary of the Communist Party of Cuba (PCC), is greatly admired, and books have been written focusing on the successes of the Cuban Revolution. Communication between the CCP and the PCC has increased since the 1990s. At the 4th Plenary Session of the 16th Central Committee, which discussed the possibility of the CCP learning from other ruling parties, praise was heaped on the PCC. When Wu Guanzheng, a Central Politburo member, met with Fidel Castro in 2007, he gave him a personal letter written by Hu Jintao: "Facts have shown that China and Cuba are trustworthy good friends, good comrades, and good brothers who treat each other with sincerity. The two countries' friendship has withstood the test of a changeable international situation, and the friendship has been further strengthened and consolidated."
Since the decline and fall of communism in Eastern Europe, the CCP has begun establishing party-to-party relations with non-communist parties. These relations are sought so that the CCP can learn from them. For instance, the CCP has been eager to understand how the People's Action Party of Singapore (PAP) maintains its total domination over Singaporean politics through its "low-key presence, but total control." According to the CCP's own analysis of Singapore, the PAP's dominance can be explained by its "well-developed social network, which controls constituencies effectively by extending its tentacles deeply into society through branches of government and party-controlled groups." While the CCP accepts that Singapore is a liberal democracy, it views it as a guided democracy led by the PAP. Other differences are, according to the CCP, "that it is not a political party based on the working class—instead it is a political party of the elite. ... It is also a political party of the parliamentary system, not a revolutionary party." Other parties which the CCP studies and maintains strong party-to-party relations with are the United Malays National Organization, which ruled Malaysia (1957–2018, 2020–2022), and the Liberal Democratic Party in Japan, which has dominated Japanese politics since 1955.
Since Jiang Zemin's time, the CCP has made friendly overtures to its erstwhile foe, the Kuomintang. The CCP emphasizes strong party-to-party relations with the KMT so as to strengthen the probability of the reunification of Taiwan with mainland China. However, several studies have been written on the KMT's loss of power in 2000 after having ruled Taiwan since 1949 (the KMT officially ruled mainland China from 1928 to 1949). In general, one-party states or dominant-party states are of special interest to the party and party-to-party relations are formed so that the CCP can study them. The longevity of the Syrian Regional Branch of the Arab Socialist Ba'ath Party is attributed to the personalization of power in the al-Assad family, the strong presidential system, the inheritance of power, which passed from Hafez al-Assad to his son Bashar al-Assad, and the role given to the Syrian military in politics.
Around 2008, the CCP became especially interested in Latin America, as shown by the increasing number of delegates sent to and received from these countries. Of special fascination for the CCP is the 71-year rule of the Institutional Revolutionary Party (PRI) in Mexico. The CCP attributed the PRI's long reign in power to its strong presidential system, its tapping into the country's machismo culture, its nationalist posture, its close identification with the rural populace, and its implementation of nationalization alongside the marketization of the economy; it concluded that the PRI ultimately failed because of the lack of inner-party democracy, its pursuit of social democracy, its rigid party structures that could not be reformed, its political corruption, the pressure of globalization, and American interference in Mexican politics. While the CCP was slow to recognize the pink tide in Latin America, it has strengthened party-to-party relations with several socialist and anti-American political parties over the years. The CCP has occasionally expressed some irritation over Hugo Chávez's anti-capitalist and anti-American rhetoric. Despite this, the CCP reached an agreement in 2013 with the United Socialist Party of Venezuela (PSUV), which was founded by Chávez, for the CCP to educate PSUV cadres in political and social fields. By 2008, the CCP claimed to have established relations with 99 political parties in 29 Latin American countries.
Social democratic movements in Europe have been of great interest to the CCP since the early 1980s. With the exception of a short period in the 1970s, when the CCP forged party-to-party relations with far-right parties in an effort to halt "Soviet expansionism", the CCP's relations with European social democratic parties were its first serious effort to establish cordial party-to-party relations with non-communist parties. The CCP credits the European social democrats with creating "capitalism with a human face". Before the 1980s, the CCP had a highly negative and dismissive view of social democracy, a view dating back to the Second International and the Marxist–Leninist view of the social democratic movement. By the 1980s, that view had changed: the CCP concluded that it could actually learn something from the social democratic movement, and delegates were sent all over Europe to observe. At the time, most European social democratic parties were facing electoral decline and were in a period of self-reform. The CCP followed this with great interest, paying the most attention to reform efforts within the British Labour Party and the Social Democratic Party of Germany. The CCP concluded that both parties were re-elected because they modernized, replacing traditional state socialist tenets with new ones supporting privatization, shedding the belief in big government, conceiving a new view of the welfare state, changing their negative views of the market, and moving from their traditional support base of trade unions to entrepreneurs, the young and students.
"title": "History"
},
{
"paragraph_id": 11,
"text": "The Xi'an Incident of December 1936 paused the conflict between the CCP and the KMT. Under pressure from Marshal Zhang Xueliang and the CCP, Chiang Kai-shek finally agreed to a Second United Front focused on repelling the Japanese invaders. While the front formally existed until 1945, all collaboration between the two parties had effectively ended by 1940. Despite their formal alliance, the CCP used the opportunity to expand and carve out independent bases of operations to prepare for the coming war with the KMT. In 1939 the KMT began to restrict CCP expansion within China. This led to frequent clashes between CCP and KMT forces which subsided rapidly on the realisation on both sides that civil war amidst a foreign invasion was not an option. By 1943, the CCP was again actively expanding its territory at the expense of the KMT.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Mao Zedong became the Chairman of the CCP in 1945. After the Japanese surrender in 1945, the war between the CCP and the KMT began again in earnest. The 1945–49 period had four stages; the first was from August 1945 (when the Japanese surrendered) to June 1946 (when the peace talks between the CCP and the KMT ended). By 1945, the KMT had three times more soldiers under its command than the CCP and initially appeared to be prevailing. With the cooperation of the U.S. and Japan, the KMT was able to retake major parts of the country. However, KMT rule over the reconquered territories proved unpopular because of its endemic political corruption. Notwithstanding its numerical superiority, the KMT failed to reconquer the rural territories which made up the CCP's stronghold. Around the same time, the CCP launched an invasion of Manchuria, where they were assisted by the Soviet Union. The second stage, lasting from July 1946 to June 1947, saw the KMT extend its control over major cities such as Yan'an, the CCP headquarters, for much of the war. The KMT's successes were hollow; the CCP had tactically withdrawn from the cities, and instead undermined KMT rule there by instigating protests amongst students and intellectuals. The KMT responded to these demonstrations with heavy-handed repression. In the meantime, the KMT was struggling with factional infighting and Chiang Kai-shek's autocratic control over the party, which weakened its ability to respond to attacks. The third stage, lasting from July 1947 to August 1948, saw a limited counteroffensive by the CCP. The objective was clearing \"Central China, strengthening North China, and recovering Northeast China.\" This operation, coupled with military desertions from the KMT, resulted in the KMT losing 2 million of its 3 million troops by the spring of 1948, and saw a significant decline in support for KMT rule. The CCP was consequently able to cut off KMT garrisons in Manchuria and retake several territories. The last stage, lasting from September 1948 to December 1949, saw the communists go on the offensive and the collapse of KMT rule in mainland China as a whole. Mao's proclamation of the founding of the People's Republic of China on 1 October 1949 marked the end of the second phase of the Chinese Civil War (or the Chinese Communist Revolution, as it is called by the CCP).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Mao proclaimed the founding of the People's Republic of China (PRC) before a massive crowd at Tiananmen Square on 1 October 1949. The CCP headed the Central People's Government. From this time through the 1980s, top leaders of the CCP (such as Mao Zedong, Lin Biao, Zhou Enlai and Deng Xiaoping) were largely the same military leaders prior to the PRC's founding. As a result, informal personal ties between political and military leaders dominated civil-military relations.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Stalin proposed a one-party constitution when Liu Shaoqi visited the Soviet Union in 1952. The constitution of the PRC in 1954 subsequently abolished the previous coalition government and established the CCP's one-party system. In 1957, the CCP launched the Anti-Rightist Campaign against political dissidents and prominent figures from minor parties, which resulted in the political persecution of at least 550,000 people. The campaign significantly damaged the limited pluralistic nature in the socialist republic and solidified the country's status as a de facto one-party state.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Anti-Rightist Campaign led to the catastrophic results of the Second Five Year Plan from 1958 to 1962, known as the Great Leap Forward. In an effort to transform the country from an agrarian economy into an industrialized one, the CCP collectivized farmland, formed people's communes, and diverted labor to factories. General mismanagement and exaggerations of harvests by CCP officials led to the Great Chinese Famine, which resulted in an estimated 15 to 45 million deaths, making it the largest famine in recorded history.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "During the 1960s and 1970s, the CCP experienced a significant ideological separation from the Communist Party of the Soviet Union which was going through a period of \"de-Stalinization\" under Nikita Khrushchev. By that time, Mao had begun saying that the \"continued revolution under the dictatorship of the proletariat\" stipulated that class enemies continued to exist even though the socialist revolution seemed to be complete, leading to the Cultural Revolution in which millions were persecuted and killed. During the Cultural Revolution, party leaders such as Liu Shaoqi, Deng Xiaoping, Peng Dehuai, and He Long were purged or exiled, and the Gang of Four, led by Mao's wife Jiang Qing, emerged to fill in the power vacuum left behind.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Following Mao's death in 1976, a power struggle between CCP chairman Hua Guofeng and vice-chairman Deng Xiaoping erupted. Deng won the struggle, and became China's paramount leader in 1978. Deng, alongside Hu Yaobang and Zhao Ziyang, spearheaded the \"reform and opening-up\" policies, and introduced the ideological concept of socialism with Chinese characteristics, opening China to the world's markets. In reversing some of Mao's \"leftist\" policies, Deng argued that a socialist state could use the market economy without itself being capitalist. While asserting the political power of the CCP, the change in policy generated significant economic growth. This was justified on the basis that \"Practice is the Sole Criterion for the Truth\", a principle reinforced through a 1978 article that aimed to combat dogmatism and criticized the \"Two Whatevers\" policy. The new ideology, however, was contested on both sides of the spectrum, by Maoists to the left of the CCP's leadership, as well as by those supporting political liberalization. With other social factors, the conflicts culminated in the 1989 Tiananmen Square protests and massacre. The protests having been crushed and the reformist party general secretary Zhao Ziyang under house arrest, Deng's economic policies resumed and by the early 1990s the concept of a socialist market economy had been introduced. In 1997, Deng's beliefs (officially called \"Deng Xiaoping Theory\") were embedded into the CCP's constitution.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "CCP general secretary Jiang Zemin succeeded Deng as paramount leader in the 1990s and continued most of his policies. In the 1990s, the CCP transformed from a veteran revolutionary leadership that was both leading militarily and politically, to a political elite increasingly renewed according to institutionalized norms in the civil bureaucracy. Leadership was largely selected based on rules and norms on promotion and retirement, educational background, and managerial and technical expertise. There is a largely separate group of professionalized military officers, serving under top CCP leadership largely through formal relationships within institutional channels.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "As part of Jiang Zemin's nominal legacy, the CCP ratified the \"Three Represents\" for the 2003 revision of the party's constitution, as a \"guiding ideology\" to encourage the party to represent \"advanced productive forces, the progressive course of China's culture, and the fundamental interests of the people.\" The theory legitimized the entry of private business owners and bourgeois elements into the party. Hu Jintao, Jiang Zemin's successor as general secretary, took office in 2002. Unlike Mao, Deng and Jiang Zemin, Hu laid emphasis on collective leadership and opposed one-man dominance of the political system. The insistence on focusing on economic growth led to a wide range of serious social problems. To address these, Hu introduced two main ideological concepts: the \"Scientific Outlook on Development\" and \"Harmonious Society\". Hu resigned from his post as CCP general secretary and Chairman of the CMC at the 18th National Congress held in 2012, and was succeeded in both posts by Xi Jinping.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Since taking power, Xi has initiated a wide-reaching anti-corruption campaign, while centralizing powers in the office of CCP general secretary at the expense of the collective leadership of prior decades. Commentators have described the campaign as a defining part of Xi's leadership as well as \"the principal reason why he has been able to consolidate his power so quickly and effectively.\" Xi's leadership has also overseen an increase in the Party's role in China. Xi has added his ideology, named after himself, into the CCP constitution in 2017. Xi's term as general secretary was renewed in 2022.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Since 2014, the CCP has led efforts in Xinjiang that involve the detention of more than 1 million Uyghurs and other ethnic minorities in internment camps, as well as other repressive measures. This has been described as a genocide by academics and some governments. On the other hand, a greater number of countries signed a letter penned to the Human Rights Council supporting the policies as an effort to combat terrorism in the region.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Celebrations of the 100th anniversary of the CCP's founding, one of the Two Centenaries, took place on 1 July 2021. In the sixth plenary session of the 19th Central Committee in November 2021, CCP adopted a resolution on the Party's history. This was the third of its kind after ones adopted by Mao Zedong and Deng Xiaoping, and the document for the first time credited Xi as being the \"main innovator\" of Xi Jinping Thought while also declaring Xi's leadership as being \"the key to the great rejuvenation of the Chinese nation\". In comparison with the other historical resolutions, Xi's one did not herald a major change in how the CCP evaluated its history.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "On July 6, 2021, Xi chaired the Communist Party of China and World Political Parties Summit, which involved representatives from 500 political parties across 160 countries. Xi urged the participants to oppose \"technology blockades,\" and \"developmental decoupling\" in order to work towards \"building a community with a shared future for mankind.\"",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The core ideology of the party has evolved with each distinct generation of Chinese leadership. As both the CCP and the People's Liberation Army promote their members according to seniority, it is possible to discern distinct generations of Chinese leadership. In official discourse, each group of leadership is identified with a distinct extension of the ideology of the party. Historians have studied various periods in the development of the government of the People's Republic of China by reference to these \"generations\".",
"title": "Ideology"
},
{
"paragraph_id": 25,
"text": "Marxism–Leninism was the first official ideology of the CCP. According to the CCP, \"Marxism–Leninism reveals the universal laws governing the development of history of human society.\" To the CCP, Marxism–Leninism provides a \"vision of the contradictions in capitalist society and of the inevitability of a future socialist and communist societies\". According to the People's Daily, Mao Zedong Thought \"is Marxism–Leninism applied and developed in China\". Mao Zedong Thought was conceived not only by Mao Zedong, but by leading party officials.",
"title": "Ideology"
},
{
"paragraph_id": 26,
"text": "Deng Xiaoping Theory was added to the party constitution at the 14th National Congress in 1992. The concepts of \"socialism with Chinese characteristics\" and \"the primary stage of socialism\" were credited to the theory. Deng Xiaoping Theory can be defined as a belief that state socialism and state planning is not by definition communist, and that market mechanisms are class neutral. In addition, the party needs to react to the changing situation dynamically; to know if a certain policy is obsolete or not, the party had to \"seek truth from facts\" and follow the slogan \"practice is the sole criterion for the truth\". At the 14th National Congress, Jiang reiterated Deng's mantra that it was unnecessary to ask if something was socialist or capitalist, since the important factor was whether it worked.",
"title": "Ideology"
},
{
"paragraph_id": 27,
"text": "The \"Three Represents\", Jiang Zemin's contribution to the party's ideology, was adopted by the party at the 16th National Congress. The Three Represents defines the role of the CCP, and stresses that the Party must always represent the requirements for developing China's advanced productive forces, the orientation of China's advanced culture and the fundamental interests of the overwhelming majority of the Chinese people.\" Certain segments within the CCP criticized the Three Represents as being un-Marxist and a betrayal of basic Marxist values. Supporters viewed it as a further development of socialism with Chinese characteristics. Jiang disagreed, and had concluded that attaining the communist mode of production, as formulated by earlier communists, was more complex than had been realized, and that it was useless to try to force a change in the mode of production, as it had to develop naturally, by following the \"economic laws of history.\" The theory is most notable for allowing capitalists, officially referred to as the \"new social strata\", to join the party on the grounds that they engaged in \"honest labor and work\" and through their labour contributed \"to build[ing] socialism with Chinese characteristics.\"",
"title": "Ideology"
},
{
"paragraph_id": 28,
"text": "In 2003 the 3rd Plenary Session of the 16th Central Committee conceived and formulated the ideology of the Scientific Outlook on Development (SOD). It is considered to be Hu Jintao's contribution to the official ideological discourse. The SOD incorporates scientific socialism, sustainable development, social welfare, a humanistic society, increased democracy, and, ultimately, the creation of a Socialist Harmonious Society. According to official statements by the CCP, the concept integrates \"Marxism with the reality of contemporary China and with the underlying features of our times, and it fully embodies the Marxist worldview on and methodology for development.\"",
"title": "Ideology"
},
{
"paragraph_id": 29,
"text": "Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era, commonly known as Xi Jinping Thought, was added to the party constitution in the 19th National Congress in 2017.",
"title": "Ideology"
},
{
"paragraph_id": 30,
"text": "The party combines elements of both socialist patriotism and Chinese nationalism.",
"title": "Ideology"
},
{
"paragraph_id": 31,
"text": "Deng did not believe that the fundamental difference between the capitalist mode of production and the socialist mode of production was central planning versus free markets. He said, \"A planned economy is not the definition of socialism, because there is planning under capitalism; the market economy happens under socialism, too. Planning and market forces are both ways of controlling economic activity\". Jiang Zemin supported Deng's thinking, and stated in a party gathering that it did not matter if a certain mechanism was capitalist or socialist, because the only thing that mattered was whether it worked. It was at this gathering that Jiang Zemin introduced the term socialist market economy, which replaced Chen Yun's \"planned socialist market economy\". In his report to the 14th National Congress Jiang Zemin told the delegates that the socialist state would \"let market forces play a basic role in resource allocation.\" At the 15th National Congress, the party line was changed to \"make market forces further play their role in resource allocation\"; this line continued until the 3rd Plenary Session [zh] of the 18th Central Committee, when it was amended to \"let market forces play a decisive role in resource allocation.\" Despite this, the 3rd Plenary Session of the 18th Central Committee upheld the creed \"Maintain the dominance of the public sector and strengthen the economic vitality of the state-owned economy.\"",
"title": "Ideology"
},
{
"paragraph_id": 32,
"text": "\"... their theory that capitalism is the ultimate [force] has been shaken, and socialist development has experienced a miracle. Western capitalism has suffered reversals, a financial crisis, a credit crisis, a crisis of confidence, and their self-conviction has wavered. Western countries have begun to reflect, and openly or secretively compare themselves against China's politics, economy and path.\"",
"title": "Ideology"
},
{
"paragraph_id": 33,
"text": "— Xi Jinping, on the inevitability of socialism",
"title": "Ideology"
},
{
"paragraph_id": 34,
"text": "The CCP views the world as organized into two opposing camps; socialist and capitalist. They insist that socialism, on the basis of historical materialism, will eventually triumph over capitalism. In recent years, when the party has been asked to explain the capitalist globalization occurring, the party has returned to the writings of Karl Marx. Despite admitting that globalization developed through the capitalist system, the party's leaders and theorists argue that globalization is not intrinsically capitalist. The reason being that if globalization was purely capitalist, it would exclude an alternative socialist form of modernity. Globalization, as with the market economy, therefore does not have one specific class character (neither socialist nor capitalist) according to the party. The insistence that globalization is not fixed in nature comes from Deng's insistence that China can pursue socialist modernization by incorporating elements of capitalism. Because of this there is considerable optimism within the CCP that despite the current capitalist dominance of globalization, globalization can be turned into a vehicle supporting socialism.",
"title": "Ideology"
},
{
"paragraph_id": 35,
"text": "While foreign analysts generally agree that the CCP has rejected orthodox Marxism–Leninism and Mao Zedong Thought (or at least basic thoughts within orthodox thinking), the CCP itself disagrees. Critics of the CCP argue that Jiang Zemin ended the party's formal commitment to Marxism–Leninism with the introduction of the ideological theory, the Three Represents. However, party theorist Leng Rong disagrees, claiming that \"President Jiang rid the Party of the ideological obstacles to different kinds of ownership ... He did not give up Marxism or socialism. He strengthened the Party by providing a modern understanding of Marxism and socialism—which is why we talk about a 'socialist market economy' with Chinese characteristics.\" The attainment of true \"communism\" is still described as the CCP's and China's \"ultimate goal\". While the CCP claims that China is in the primary stage of socialism, party theorists argue that the current development stage \"looks a lot like capitalism\". Alternatively, certain party theorists argue that \"capitalism is the early or first stage of communism.\" Some have dismissed the concept of a primary stage of socialism as intellectual cynicism. For example, Robert Lawrence Kuhn, a former foreign adviser to the Chinese government, stated: \"When I first heard this rationale, I thought it more comic than clever—a wry caricature of hack propagandists leaked by intellectual cynics. But the 100-year horizon comes from serious political theorists.\"",
"title": "Ideology"
},
{
"paragraph_id": 36,
"text": "American political scientist and sinologist David Shambaugh argues that before the \"Practice Is the Sole Criterion for the Truth\" campaign, the relationship between ideology and decision making was a deductive one, meaning that policy-making was derived from ideological knowledge. However, under Deng's leadership this relationship was turned upside down, with decision making justifying ideology. Chinese policy-makers have described the Soviet Union's state ideology as \"rigid, unimaginative, ossified, and disconnected from reality\", believing that this was one of the reasons for the dissolution of the Soviet Union. Therefore, Shambaugh argues, Chinese policy-makers believe that their party ideology must be dynamic to safeguard the party's rule.",
"title": "Ideology"
},
{
"paragraph_id": 37,
"text": "British sinologist Kerry Brown argues that the CCP does not have an ideology, and that the party organization is pragmatic and interested only in what works. The party itself argues against this assertion. Hu Jintao stated in 2012 that the Western world is \"threatening to divide us\" and that \"the international culture of the West is strong while we are weak ... Ideological and cultural fields are our main targets\". As such, the CCP puts a great deal of effort into the party schools and into crafting its ideological message.",
"title": "Ideology"
},
{
"paragraph_id": 38,
"text": "Collective leadership, the idea that decisions will be taken through consensus, is the ideal in the CCP. The concept has its origins back to Lenin and the Russian Bolshevik Party. At the level of the central party leadership this means that, for instance, all members of the Politburo Standing Committee are of equal standing (each member having only one vote). A member of the Politburo Standing Committee often represents a sector; during Mao's reign, he controlled the People's Liberation Army, Kang Sheng, the security apparatus, and Zhou Enlai, the State Council and the Ministry of Foreign Affairs. This counts as informal power. Despite this, in a paradoxical relation, members of a body are ranked hierarchically (despite the fact that members are in theory equal to one another). Informally, the collective leadership is headed by a \"leadership core\"; that is, the paramount leader, the person who holds the offices of CCP general secretary, CMC chairman and PRC president. Before Jiang Zemin's tenure as paramount leader, the party core and collective leadership were indistinguishable. In practice, the core was not responsible to the collective leadership. However, by the time of Jiang, the party had begun propagating a responsibility system, referring to it in official pronouncements as the \"core of the collective leadership\".",
"title": "Governance"
},
{
"paragraph_id": 39,
"text": "\"[Democratic centralism] is centralized on the basis of democracy and democratic under centralized guidance. This is the only system that can give full expression to democracy with full powers vested in the people's congresses at all levels and, at the same time, guarantee centralized administration with the governments at each level ...\"",
"title": "Governance"
},
{
"paragraph_id": 40,
"text": "— Mao Zedong, from his speech entitled \"Our General Programme\"",
"title": "Governance"
},
{
"paragraph_id": 41,
"text": "The CCP's organizational principle is democratic centralism, a principle that entails open discussion of policy on the condition of unity among party members in upholding the agreed-upon decision. It is based on two principles: democracy (synonymous in official discourse with \"socialist democracy\" and \"inner-party democracy\") and centralism. This has been the guiding organizational principle of the party since the 5th National Congress, held in 1927. In the words of the party constitution, \"The Party is an integral body organized under its program and constitution and on the basis of democratic centralism\". Mao once quipped that democratic centralism was \"at once democratic and centralized, with the two seeming opposites of democracy and centralization united in a definite form.\" Mao claimed that the superiority of democratic centralism lay in its internal contradictions, between democracy and centralism, and freedom and discipline. Currently, the CCP is claiming that \"democracy is the lifeline of the Party, the lifeline of socialism\". But for democracy to be implemented, and functioning properly, there needs to be centralization. The goal of democratic centralism was not to obliterate capitalism or its policies but instead it is the movement towards regulating capitalism while involving socialism and democracy. Democracy in any form, the CCP claims, needs centralism, since without centralism there will be no order.",
"title": "Governance"
},
{
"paragraph_id": 42,
"text": "Shuanggui is an intra-party disciplinary process conducted by the Central Commission for Discipline Inspection (CCDI), which conducts shuanggui on members accused of \"disciplinary violations\", a charge which generally refers to political corruption. The process, which literally translates to \"double regulation\", aims to extract confessions from members accused of violating party rules. According to the Dui Hua Foundation, tactics such as cigarette burns, beatings and simulated drowning are among those used to extract confessions. Other reported techniques include the use of induced hallucinations, with one subject of this method reporting that \"In the end I was so exhausted, I agreed to all the accusations against me even though they were false.\"",
"title": "Governance"
},
{
"paragraph_id": 43,
"text": "The CCP employs a political strategy that it terms \"united front work\" that involves groups and key individuals that are influenced or controlled by the CCP and used to advance its interests. United front work is managed primarily but not exclusively by the United Front Work Department (UFWD). The united front has historically been a popular front that has included eight legally-permitted political parties alongside other people's organizations which have nominal representation in the National People's Congress and the Chinese People's Political Consultative Conference (CPPCC). However, the CPPCC is a body without real power. While consultation does take place, it is supervised and directed by the CCP. Under Xi Jinping, the united front and its targets of influence have expanded in size and scope.",
"title": "Governance"
},
{
"paragraph_id": 44,
"text": "The National Congress is the party's highest body, and, since the 9th National Congress in 1969, has been convened every five years (prior to the 9th Congress they were convened on an irregular basis). According to the party's constitution, a congress may not be postponed except \"under extraordinary circumstances.\" The party constitution gives the National Congress six responsibilities:",
"title": "Organization"
},
{
"paragraph_id": 45,
"text": "In practice, the delegates rarely discuss issues at length at the National Congresses. Most substantive discussion takes place before the congress, in the preparation period, among a group of top party leaders. In between National Congresses, the Central Committee is the highest decision-making institution. The CCDI is responsible for supervising party's internal anti-corruption and ethics system. In between congresses the CCDI is under the authority of the Central Committee.",
"title": "Organization"
},
{
"paragraph_id": 46,
"text": "The Central Committee, as the party's highest decision-making institution between national congresses, elects several bodies to carry out its work. The first plenary session of a newly elected central committee elects the general secretary of the Central Committee, the party's leader; the Central Military Commission (CMC); the Politburo; the Politburo Standing Committee (PSC). The first plenum also endorses the composition of the Secretariat and the leadership of the CCDI. According to the party constitution, the general secretary must be a member of the Politburo Standing Committee (PSC), and is responsible for convening meetings of the PSC and the Politburo, while also presiding over the work of the Secretariat. The Politburo \"exercises the functions and powers of the Central Committee when a plenum is not in session\". The PSC is the party's highest decision-making institution when the Politburo, the Central Committee and the National Congress are not in session. It convenes at least once a week. It was established at the 8th National Congress, in 1958, to take over the policy-making role formerly assumed by the Secretariat. The Secretariat is the top implementation body of the Central Committee, and can make decisions within the policy framework established by the Politburo; it is also responsible for supervising the work of organizations that report directly into the Central Committee, for example departments, commissions, publications, and so on. The CMC is the highest decision-making institution on military affairs within the party, and controls the operations of the People's Liberation Army. The general secretary has, since Jiang Zemin, also served as Chairman of the CMC. Unlike the collective leadership ideal of other party organs, the CMC chairman acts as commander-in-chief with full authority to appoint or dismiss top military officers at will.",
"title": "Organization"
},
{
"paragraph_id": 47,
"text": "A first plenum of the Central Committee also elects heads of departments, bureaus, central leading groups and other institutions to pursue its work during a term (a \"term\" being the period elapsing between national congresses, usually five years). The General Office is the party's \"nerve centre\", in charge of day-to-day administrative work, including communications, protocol, and setting agendas for meetings. The CCP currently has six main central departments: the Organization Department, responsible for overseeing provincial appointments and vetting cadres for future appointments, the Publicity Department (formerly \"Propaganda Department\"), which oversees the media and formulates the party line to the media, the United Front Work Department, which oversees the country's eight minor parties, people's organizations, and influence groups inside and outside of the country, the International Liaison Department, functioning as the party's \"foreign affairs ministry\" with other parties, the Social Work Department, which handles work related to civic groups, chambers of commerce and industry groups and mixed-ownership and non-public enterprises, and the Central Political and Legal Affairs Commission, which oversees the country's legal enforcement authorities. The CC also has direct control over the Central Policy Research Office, which is responsible for researching issues of significant interest to the party leadership, the Central Party School, which provides political training and ideological indoctrination in communist thought for high-ranking and rising cadres, the Institution for Party History and Literature Research, which sets priorities for scholarly research in state-run universities and the Central Party School and studies and translates the classical works of Marxism. The party's newspaper, the People's Daily, is under the direct control of the Central Committee and is published with the objectives \"to tell good stories about China and the (Party)\" and to promote its party leader. The theoretical magazines Seeking Truth from Facts and Study Times are published by the Central Party School. The China Media Group, which oversees China Central Television (CCTV), China National Radio (CNR) and China Radio International (CRI), is under the direct control of the Publicity Department. The various offices of the \"Central Leading Groups\", such as the Hong Kong and Macau Work Office, the Taiwan Affairs Office, and the Central Finance Office, also report to the central committee during a plenary session. Additionally, CCP has sole control over the People's Liberation Army (PLA) through its Central Military Commission.",
"title": "Organization"
},
{
"paragraph_id": 48,
"text": "After seizing political power, the CCP extended the dual party-state command system to all government institutions, social organizations, and economic entities. The State Council and the Supreme Court each has a party group, established since November 1949. Party committees permeate in every state administrative organ as well as the People's Consultation Conferences and mass organizations at all levels. According to scholar Rush Doshi, \"[t]he Party sits above the state, runs parallel to the state, and is enmeshed in every level of the state.\" Modelled after the Soviet Nomenklatura system, the party committee's organization department at each level has the power to recruit, train, monitor, appoint, and relocate these officials.",
"title": "Organization"
},
{
"paragraph_id": 49,
"text": "Party committees exist at the level of provinces, cities, counties, and neighborhoods. These committees play a key role in directing local policy by selecting local leaders and assigning critical tasks. The Party secretary at each level is more senior than that of the leader of the government, with the CCP standing committee being the main source of power. Party committee members in each level are selected by the leadership in the level above, with provincial leaders selected by the central Organizational Department, and not removable by the local party secretary. Neighborhood committees are generally composed of older volunteers.",
"title": "Organization"
},
{
"paragraph_id": 50,
"text": "CCP committees exist inside of companies, both private and state-owned. A business that has more than three party members is legally required to establish a committee or branch. As of 2021, more than half of China's private firms have such organizations. These branches provide places for new member socialization and host morale boosting events for existing members. They also provide mechanisms that help private firm interface with government bodies and learn about policies which relate to their fields. On average, the profitability of private firms with a CCP branch is 12.6 percent higher than the profitability of private firms.",
"title": "Organization"
},
{
"paragraph_id": 51,
"text": "Within state-owned enterprises, these branches are governing bodies that make important decisions and inculcate CCP ideology in employees. Party committees or branches within companies also provide various benefits to employees. These may include bonuses, interest-free loans, mentorship programs, and free medical and other services for those in need. Enterprises that have party branches generally provide more expansive benefits for employees in the areas of retirement, medical care, unemployment, injury, and birth and fertility. Increasingly, the CCP is requiring private companies to revise their charters to include the role of the party.",
"title": "Organization"
},
{
"paragraph_id": 52,
"text": "The funding of all CCP organizations mainly comes from state fiscal revenue. Data for the proportion of total CCP organizations’ expenditures in total China fiscal revenue is unavailable.",
"title": "Organization"
},
{
"paragraph_id": 53,
"text": "\"It is my will to join the Communist Party of China, uphold the Party's program, observe the provisions of the Party constitution, fulfill a Party member's duties, carry out the Party's decisions, strictly observe Party discipline, guard Party secrets, be loyal to the Party, work hard, fight for communism throughout my life, be ready at all times to sacrifice my all for the Party and the people, and never betray the Party.\"",
"title": "Organization"
},
{
"paragraph_id": 54,
"text": "— Chinese Communist Party Admission Oath",
"title": "Organization"
},
{
"paragraph_id": 55,
"text": "The CCP reached 98.04 million members at the end of 2022, a net increase of 1.3 million over the previous year. It is the second largest political party in the world after India's Bharatiya Janata Party.",
"title": "Organization"
},
{
"paragraph_id": 56,
"text": "To join the CCP, an applicant must go through an approval process. Adults can file applications for membership with their local party branch. A prescreening process, akin to a background check, follows. Next, established party members at the local branch vet applicants' behavior and political attitudes and may make a formal inquiry to a party branch near the applicants' parents residence to vet family loyalty to communism and the party. In 2014, only 2 million applications were accepted out of some 22 million applicants. Admitted members then spend a year as a probationary member. Probationary members are typically accepted into the party.",
"title": "Organization"
},
{
"paragraph_id": 57,
"text": "In contrast to the past, when emphasis was placed on the applicants' ideological criteria, the current CCP stresses technical and educational qualifications. To become a probationary member, the applicant must take an admission oath before the party flag. The relevant CCP organization is responsible for observing and educating probationary members. Probationary members have duties similar to those of full members, with the exception that they may not vote in party elections nor stand for election. Many join the CCP through the Communist Youth League. Under Jiang Zemin, private entrepreneurs were allowed to become party members.",
"title": "Organization"
},
{
"paragraph_id": 58,
"text": "As of December 2022, individuals who identify as farmers, herdsmen and fishermen make up 26 million members; members identifying as workers totalled 6.7 million. Another group, the \"Managing, professional and technical staff in enterprises and public institutions\", made up 15.9 million, 11.3 million identified as working in administrative staff and 7.8 million described themselves as party cadres. By 2022, CCP membership had become more educated, younger, and less blue-collar than previously, with 54.7% of party members having a college degree or above. As of 2022, around 30 to 35 percent of Chinese entrepreneurs are or have been a party member. At the end of 2022, the CCP stated that it has approximately 7.46 million ethnic minority members or 7.6% of the party.",
"title": "Organization"
},
{
"paragraph_id": 59,
"text": "As of 2023, 29.30 million women are CCP members, representing 29.9% of the party. Women in China have low participation rates as political leaders. Women's disadvantage is most evident in their severe under representation in the more powerful political positions. At the top level of decision making, no woman has ever been among the members of the Politburo Standing Committee, while the broader Politburo currently does not have any female members. Just 3 of 27 government ministers are women, and importantly, since 1997, China has fallen to 53rd place from 16th in the world in terms of female representation in the National People's Congress, according to the Inter-Parliamentary Union. CCP leaders such as Zhao Ziyang have vigorously opposed the participation of women in the political process. Within the party women face a glass ceiling.",
"title": "Organization"
},
{
"paragraph_id": 60,
"text": "A 2019 Binghamton University study found that CCP members gain a 20% wage premium in the market over non-members. A subsequent academic study found that the economic benefit of CCP membership is strongest on those in lower wealth brackets.",
"title": "Organization"
},
{
"paragraph_id": 61,
"text": "The Communist Youth League (CYL) is the CCP's youth wing, and the largest mass organization for youth in China. To join, an applicant has to be between the ages of 14 and 28. It controls and supervises Young Pioneers, a youth organization for children below the age of 14. The organizational structure of CYL is an exact copy of the CCP's; the highest body is the National Congress, followed by the Central Committee [zh], Politburo and the Politburo Standing Committee. However, the Central Committee (and all central organs) of the CYL work under the guidance of the CCP central leadership. 2021 estimates put the number of CYL members at over 81 million.",
"title": "Organization"
},
{
"paragraph_id": 62,
"text": "At the beginning of its history, the CCP did not have a single official standard for the flag, but instead allowed individual party committees to copy the flag of the Communist Party of the Soviet Union. The Central Politburo decreed the establishment of a sole official flag on 28 April 1942: \"The flag of the CPC has the length-to-width proportion of 3:2 with a hammer and sickle in the upper-left corner, and with no five-pointed star. The Political Bureau authorizes the General Office to custom-make a number of standard flags and distribute them to all major organs\".",
"title": "Symbols"
},
{
"paragraph_id": 63,
"text": "According to People's Daily, \"The red color symbolizes revolution; the hammer-and-sickle are tools of workers and peasants, meaning that the Communist Party of China represents the interests of the masses and the people; the yellow color signifies brightness.\"",
"title": "Symbols"
},
{
"paragraph_id": 64,
"text": "The International Liaison Department of the CCP is responsible for dialogue with global political parties.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 65,
"text": "The CCP continues to have relations with non-ruling communist and workers' parties and attends international communist conferences, most notably the International Meeting of Communist and Workers' Parties. While the CCP retains contact with major parties such as the Communist Party of Portugal, the Communist Party of France, the Communist Party of the Russian Federation, the Communist Party of Bohemia and Moravia, the Communist Party of Brazil, the Communist Party of Greece, the Communist Party of Nepal and the Communist Party of Spain, the party also retains relations with minor communist and workers' parties, such as the Communist Party of Australia, the Workers Party of Bangladesh, the Communist Party of Bangladesh (Marxist–Leninist) (Barua), the Communist Party of Sri Lanka, the Workers' Party of Belgium, the Hungarian Workers' Party, the Dominican Workers' Party, the Nepal Workers Peasants Party, and the Party for the Transformation of Honduras, for instance. In recent years, noting the self-reform of the European social democratic movement in the 1980s and 1990s, the CCP \"has noted the increased marginalization of West European communist parties.\"",
"title": "Party-to-party relations"
},
{
"paragraph_id": 66,
"text": "The CCP has retained close relations with the ruling parties of socialist states still espousing communism: Cuba, Laos, North Korea, and Vietnam. It spends a fair amount of time analysing the situation in the remaining socialist states, trying to reach conclusions as to why these states survived when so many did not, following the collapse of the Eastern European socialist states in 1989 and the dissolution of the Soviet Union in 1991. In general, the analyses of the remaining socialist states and their chances of survival have been positive, and the CCP believes that the socialist movement will be revitalized sometime in the future.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 67,
"text": "The ruling party which the CCP is most interested in is the Communist Party of Vietnam (CPV). In general the CPV is considered a model example of socialist development in the post-Soviet era. Chinese analysts on Vietnam believe that the introduction of the Đổi Mới reform policy at the 6th CPV National Congress is the key reason for Vietnam's current success.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 68,
"text": "While the CCP is probably the organization with most access to North Korea, writing about North Korea is tightly circumscribed. The few reports accessible to the general public are those about North Korean economic reforms. While Chinese analysts of North Korea tend to speak positively of North Korea in public, in official discussions c. 2008 they show much disdain for North Korea's economic system, the cult of personality which pervades society, the Kim family, the idea of hereditary succession in a socialist state, the security state, the use of scarce resources on the Korean People's Army and the general impoverishment of the North Korean people. Circa 2008, there are those analysts who compare the current situation of North Korea with that of China during the Cultural Revolution. Over the years, the CCP has tried to persuade the Workers' Party of Korea (or WPK, North Korea's ruling party) to introduce economic reforms by showing them key economic infrastructure in China. For instance, in 2006 the CCP invited then-WPK general secretary Kim Jong Il to Guangdong to showcase the success economic reforms had brought China. In general, the CCP considers the WPK and North Korea to be negative examples of a ruling communist party and socialist state.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 69,
"text": "There is a considerable degree of interest in Cuba within the CCP. Fidel Castro, the former First Secretary of the Communist Party of Cuba (PCC), is greatly admired, and books have been written focusing on the successes of the Cuban Revolution. Communication between the CCP and the PCC has increased since the 1990s. At the 4th Plenary Session of the 16th Central Committee, which discussed the possibility of the CCP learning from other ruling parties, praise was heaped on the PCC. When Wu Guanzheng, a Central Politburo member, met with Fidel Castro in 2007, he gave him a personal letter written by Hu Jintao: \"Facts have shown that China and Cuba are trustworthy good friends, good comrades, and good brothers who treat each other with sincerity. The two countries' friendship has withstood the test of a changeable international situation, and the friendship has been further strengthened and consolidated.\"",
"title": "Party-to-party relations"
},
{
"paragraph_id": 70,
"text": "Since the decline and fall of communism in Eastern Europe, the CCP has begun establishing party-to-party relations with non-communist parties. These relations are sought so that the CCP can learn from them. For instance, the CCP has been eager to understand how the People's Action Party of Singapore (PAP) maintains its total domination over Singaporean politics through its \"low-key presence, but total control.\" According to the CCP's own analysis of Singapore, the PAP's dominance can be explained by its \"well-developed social network, which controls constituencies effectively by extending its tentacles deeply into society through branches of government and party-controlled groups.\" While the CCP accepts that Singapore is a liberal democracy, they view it as a guided democracy led by the PAP. Other differences are, according to the CCP, \"that it is not a political party based on the working class—instead it is a political party of the elite. ... It is also a political party of the parliamentary system, not a revolutionary party.\" Other parties which the CCP studies and maintains strong party-to-party relations with are the United Malays National Organization, which has ruled Malaysia (1957–2018, 2020–2022), and the Liberal Democratic Party in Japan, which dominated Japanese politics since 1955.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 71,
"text": "Since Jiang Zemin's time, the CCP has made friendly overtures to its erstwhile foe, the Kuomintang. The CCP emphasizes strong party-to-party relations with the KMT so as to strengthen the probability of the reunification of Taiwan with mainland China. However, several studies have been written on the KMT's loss of power in 2000 after having ruled Taiwan since 1949 (the KMT officially ruled mainland China from 1928 to 1949). In general, one-party states or dominant-party states are of special interest to the party and party-to-party relations are formed so that the CCP can study them. The longevity of the Syrian Regional Branch of the Arab Socialist Ba'ath Party is attributed to the personalization of power in the al-Assad family, the strong presidential system, the inheritance of power, which passed from Hafez al-Assad to his son Bashar al-Assad, and the role given to the Syrian military in politics.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 72,
"text": "Circa 2008, the CCP has been especially interested in Latin America, as shown by the increasing number of delegates sent to and received from these countries. Of special fascination for the CCP is the 71-year-long rule of the Institutional Revolutionary Party (PRI) in Mexico. While the CCP attributed the PRI's long reign in power to the strong presidential system, tapping into the machismo culture of the country, its nationalist posture, its close identification with the rural populace and the implementation of nationalization alongside the marketization of the economy, the CCP concluded that the PRI failed because of the lack of inner-party democracy, its pursuit of social democracy, its rigid party structures that could not be reformed, its political corruption, the pressure of globalization, and American interference in Mexican politics. While the CCP was slow to recognize the pink tide in Latin America, it has strengthened party-to-party relations with several socialist and anti-American political parties over the years. The CCP has occasionally expressed some irritation over Hugo Chávez's anti-capitalist and anti-American rhetoric. Despite this, the CCP reached an agreement in 2013 with the United Socialist Party of Venezuela (PSUV), which was founded by Chávez, for the CCP to educate PSUV cadres in political and social fields. By 2008, the CCP claimed to have established relations with 99 political parties in 29 Latin American countries.",
"title": "Party-to-party relations"
},
{
"paragraph_id": 73,
"text": "Social democratic movements in Europe have been of great interest to the CCP since the early 1980s. With the exception of a short period in which the CCP forged party-to-party relations with far-right parties during the 1970s in an effort to halt \"Soviet expansionism\", the CCP's relations with European social democratic parties were its first serious efforts to establish cordial party-to-party relations with non-communist parties. The CCP credits the European social democrats with creating a \"capitalism with a human face\". Before the 1980s, the CCP had a highly negative and dismissive view of social democracy, a view dating back to the Second International and the Marxist–Leninist view on the social democratic movement. By the 1980s, that view had changed and the CCP concluded that it could actually learn something from the social democratic movement. CCP delegates were sent all over Europe to observe. By the 1980s, most European social democratic parties were facing electoral decline and in a period of self-reform. The CCP followed this with great interest, laying most weight on reform efforts within the British Labour Party and the Social Democratic Party of Germany. The CCP concluded that both parties were re-elected because they modernized, replacing traditional state socialist tenets with new ones supporting privatization, shedding the belief in big government, conceiving a new view of the welfare state, changing their negative views of the market and moving from their traditional support base of trade unions to entrepreneurs, the young and students.",
"title": "Party-to-party relations"
}
] | The Chinese Communist Party (CCP), officially the Communist Party of China (CPC), is the founding and sole ruling party of the People's Republic of China (PRC). Under the leadership of Mao Zedong, the CCP emerged victorious in the Chinese Civil War against the Kuomintang. In 1949, Mao proclaimed the establishment of the People's Republic of China. Since then, the CCP has governed China and has had sole control over the People's Liberation Army (PLA). Successive leaders of the CCP have added their own theories to the party's constitution, which outlines the party's ideology, collectively referred to as socialism with Chinese characteristics. As of 2023, the CCP has more than 98 million members, making it the second largest political party by membership in the world after India's Bharatiya Janata Party. In 1921, Chen Duxiu and Li Dazhao led the founding of the CCP with the help of the Far Eastern Bureau of the Communist Party of the Soviet Union and Far Eastern Bureau of the Communist International. For the first six years of its history, the CCP aligned itself with the Kuomintang (KMT) as the organized left wing of the larger nationalist movement. However, when the right wing of the KMT, led by Chiang Kai-shek, turned on the CCP and massacred tens of thousands of the party's members, the two parties split and began a prolonged civil war. During the next ten years of guerrilla warfare, Mao Zedong rose to become the most influential figure in the CCP, and the party established a strong base among the rural peasantry with its land reform policies. Support for the CCP continued to grow throughout the Second Sino-Japanese War, and after the Japanese surrender in 1945, the CCP emerged triumphant in the communist revolution against the Nationalist government. After the KMT's retreat to Taiwan, the CCP established the People's Republic of China on 1 October 1949. Mao Zedong continued to be the most influential member of the CCP until his death in 1976, although he periodically withdrew from public leadership as his health deteriorated. Under Mao, the party completed its land reform program, launched a series of five-year plans, and eventually split with the Soviet Union. Although Mao attempted to purge the party of capitalist and reactionary elements during the Cultural Revolution, after his death, these policies were only briefly continued by the Gang of Four before a less radical faction seized control. During the 1980s, Deng Xiaoping directed the CCP away from Maoist orthodoxy and towards a policy of economic liberalization. The official explanation for these reforms was that China was still in the primary stage of socialism, a developmental stage similar to the capitalist mode of production. Since the collapse of the Eastern Bloc and the dissolution of the Soviet Union in 1991, the CCP has focused on maintaining its relations with the ruling parties of the remaining socialist states and continues to participate in the International Meeting of Communist and Workers' Parties each year. The CCP has also established relations with several non-communist parties, including dominant nationalist parties of many developing countries in Africa, Asia and Latin America, as well as social democratic parties in Europe. The Chinese Communist Party is organized based on democratic centralism, a principle that entails open policy discussion on the condition of unity among party members in upholding the agreed-upon decision. The highest body of the CCP is the National Congress, convened every fifth year. 
When the National Congress is not in session, the Central Committee is the highest body, but since that body usually only meets once a year, most duties and responsibilities are vested in the Politburo and its Standing Committee. Members of the latter are seen as the top leadership of the party and the state. Today the party's leader holds the offices of general secretary, Chairman of the Central Military Commission (CMC), and State President. Because of these posts, the party leader is seen as the country's paramount leader. The current leader is Xi Jinping, who was elected at the 1st Plenary Session of the 18th Central Committee held on 15 November 2012 and has been reelected twice, on 25 October 2017 by the 19th Central Committee and on 10 October 2022 by the 20th Central Committee. | 2001-11-19T20:35:16Z | 2023-12-24T21:17:13Z | [
"Template:Better source needed",
"Template:Communist parties",
"Template:Update inline",
"Template:Pp-semi-indef",
"Template:Infobox political party",
"Template:TOC limit",
"Template:Cn",
"Template:Expand section",
"Template:Refend",
"Template:CCP Party Organs",
"Template:Use Oxford spelling",
"Template:Clear",
"Template:Official website",
"Template:Navboxes",
"Template:Use dmy dates",
"Template:NoteTag",
"Template:Circa",
"Template:Composition bar",
"Template:Commons and category inline",
"Template:Cite encyclopedia",
"Template:Cite web",
"Template:Library resources box",
"Template:Rp",
"Template:Ill",
"Template:Multiple image",
"Template:Increase",
"Template:Cite news",
"Template:Authority control",
"Template:Infobox Chinese",
"Template:Nbsp",
"Template:Wikiquote-inline",
"Template:Refbegin",
"Template:As of",
"Template:Main",
"Template:Sfn",
"Template:Quote box",
"Template:Cite book",
"Template:Further",
"Template:Reflist",
"Template:Cbignore",
"Template:Citation",
"Template:Chinese Communist Party",
"Template:NoteFoot",
"Template:Cite journal",
"Template:Short description",
"Template:Redirect",
"Template:Steady",
"Template:Decrease",
"Template:Portal"
] | https://en.wikipedia.org/wiki/Chinese_Communist_Party |
7,176 | Cryogenics | In physics, cryogenics is the production and behaviour of materials at very low temperatures.
The 13th IIR International Congress of Refrigeration (held in Washington DC in 1971) endorsed a universal definition of "cryogenics" and "cryogenic" by accepting a threshold of 120 K (or −153 °C) to distinguish these terms from conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. The U.S. National Institute of Standards and Technology considers the field of cryogenics as that involving temperatures below −153 °C (120 K; −243.4 °F).
Discovery of superconducting materials with critical temperatures significantly above the boiling point of nitrogen has renewed interest in reliable, low-cost methods of producing high temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C (77.36 K; −320.42 °F), up to −50 °C (223 K; −58 °F). The discovery of superconductivity is attributed to Heike Kamerlingh Onnes, who first liquefied helium on July 10, 1908, making temperatures of a few kelvins accessible in the laboratory. The first superconductive properties were observed in mercury at a temperature of 4.2 K in 1911.
Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius which measures from the freezing point of water at sea level or Fahrenheit which measures from the freezing point of a particular brine solution at sea level.
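As a quick worked example of how these scales relate, here is a minimal conversion sketch in Python; the only assumptions are the standard relations K = °C + 273.15, °R = 1.8 × K, and °F = °R − 459.67.

```python
# Temperature-scale conversions used throughout cryogenics.
# Kelvin and Rankine count up from absolute zero; Celsius and Fahrenheit are offset scales.

def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_rankine(k: float) -> float:
    return k * 1.8

def kelvin_to_fahrenheit(k: float) -> float:
    return k * 1.8 - 459.67

threshold = 120.0  # the IIR cryogenic threshold, in kelvins
print(f"{threshold} K = {kelvin_to_celsius(threshold):.2f} °C")     # -153.15 °C
print(f"{threshold} K = {kelvin_to_rankine(threshold):.2f} °R")     # 216.00 °R
print(f"{threshold} K = {kelvin_to_fahrenheit(threshold):.2f} °F")  # -243.67 °F
# Note: the -243.4 °F figure quoted above comes from rounding to -153 °C first.

ln2_boiling = 77.36  # K, boiling point of liquid nitrogen quoted above
print(f"{ln2_boiling} K = {kelvin_to_celsius(ln2_boiling):.2f} °C")  # -195.79 °C
```

Because Rankine is simply the Fahrenheit-sized degree counted from absolute zero, the −320 °F figure used later for cryogenic processing corresponds to about 140 °R, 78 K, and −196 °C.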
The word cryogenics stems from Greek κρύος (kryos) – "cold" + γενής (genēs) – "generating".
Cryogenic fluids, with their boiling points in kelvins and degrees Celsius.
Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used cryogenic fluid and is legally purchasable around the world. Liquid helium is also commonly used and allows the lowest attainable temperatures to be reached.
These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing.
Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius.
Cryogenic transfer pumps, together with cryogenic valves, are used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks.
The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Bill and Ed Busch. With a background in the heat treating industry, the Busch brothers founded a company in Detroit called CryoTech in 1966. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts.
Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately −100 °C (−148 °F). Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures.
Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to −320 °F (140 °R; 78 K; −196 °C). In most instances the cryogenic cycle is followed by a heat tempering procedure. Because not all alloys have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application.
The entire process takes 3–4 days.
Another use of cryogenics is cryogenic fuels for rockets with liquid hydrogen as the most widely used example. Liquid oxygen (LOX) is even more widely used but as an oxidizer, not a fuel. NASA's workhorse Space Shuttle used cryogenic hydrogen/oxygen propellant as its primary means of getting into orbit. LOX is also widely used with RP-1 kerosene, a non-cryogenic hydrocarbon, such as in the rockets built for the Soviet space program by Sergei Korolev.
Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989.
Some applications of cryogenics:
Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use with selection based on required base temperature and cooling capacity. The most recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect.
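To illustrate the kind of selection "based on required base temperature and cooling capacity" described above, here is a deliberately simplified sketch; the temperature cut-offs are rough, typical figures assumed for illustration, and real selection also weighs cooling power at temperature, vibration, maintenance, and cost.

```python
# Simplified mapping from required base temperature to a typical cooling approach.
# The thresholds are approximate assumptions, not specifications from any vendor.

def suggest_cooling(base_temperature_k: float) -> str:
    if base_temperature_k >= 77:
        return "liquid-nitrogen bath (LN2 boils near 77 K)"
    if base_temperature_k >= 4:
        return "two-stage Gifford-McMahon or pulse tube cryocooler, or a liquid-helium bath (~4.2 K)"
    if base_temperature_k >= 1:
        return "pumped liquid helium-4 bath"
    return "helium-3 system or dilution refrigerator for sub-kelvin work"

for required in (100.0, 20.0, 2.0, 0.05):
    print(f"{required} K -> {suggest_cooling(required)}")
```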
There are various cryogenic detectors which are used to detect particles.
For cryogenic temperature measurement down to 30 K, Pt100 sensors, a type of resistance temperature detector (RTD), are used. For temperatures lower than 30 K, it is necessary to use a silicon diode for accuracy. | [
{
"paragraph_id": 0,
"text": "In physics, cryogenics is the production and behaviour of materials at very low temperatures.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The 13th IIR International Congress of Refrigeration (held in Washington DC in 1971) endorsed a universal definition of \"cryogenics\" and \"cryogenic\" by accepting a threshold of 120 K (or –153 °C) to distinguish these terms from the conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. The U.S. National Institute of Standards and Technology considers the field of cryogenics as that involving temperatures below -153 °C (120 K; -243.4 Fahrenheit)",
"title": ""
},
{
"paragraph_id": 2,
"text": "Discovery of superconducting materials with critical temperatures significantly above the boiling point of nitrogen has provided new interest in reliable, low cost methods of producing high temperature cryogenic refrigeration. The term \"high temperature cryogenic\" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C (77.36 K; −320.42 °F), up to −50 °C (223 K; −58 °F). The discovery of superconductive properties is first attributed to Heike Kamerlingh Onnes on July 10, 1908. The discovery came after the ability to reach a temperature of 2 K. These first superconductive properties were observed in mercury at a temperature of 4.2 K.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius which measures from the freezing point of water at sea level or Fahrenheit which measures from the freezing point of a particular brine solution at sea level.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The word cryogenics stems from Greek κρύος (cryos) – \"cold\" + γενής (genis) – \"generating\".",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "Cryogenic fluids with their boiling point in Kelvin and degree Celsius.",
"title": "Cryogenic fluids"
},
{
"paragraph_id": 6,
"text": "Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached.",
"title": "Industrial applications"
},
{
"paragraph_id": 7,
"text": "These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing.",
"title": "Industrial applications"
},
{
"paragraph_id": 8,
"text": "Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius.",
"title": "Industrial applications"
},
{
"paragraph_id": 9,
"text": "Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves.",
"title": "Industrial applications"
},
{
"paragraph_id": 10,
"text": "The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Bill and Ed Busch. With a background in the heat treating industry, the Busch brothers founded a company in Detroit called CryoTech in 1966. Busch originally experimented with the possibility of increasing the life of metal tools to anywhere between 200% and 400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts.",
"title": "Industrial applications"
},
{
"paragraph_id": 11,
"text": "Cryogens, such as liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately −100 °C (−148 °F). Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures.",
"title": "Industrial applications"
},
{
"paragraph_id": 12,
"text": "Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating–quenching–tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to −320 °F (140 °R; 78 K; −196 °C). In most instances the cryogenic cycle is followed by a heat tempering procedure. As all alloys do not have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application.",
"title": "Industrial applications"
},
{
"paragraph_id": 13,
"text": "The entire process takes 3–4 days.",
"title": "Industrial applications"
},
{
"paragraph_id": 14,
"text": "Another use of cryogenics is cryogenic fuels for rockets with liquid hydrogen as the most widely used example. Liquid oxygen (LOX) is even more widely used but as an oxidizer, not a fuel. NASA's workhorse Space Shuttle used cryogenic hydrogen/oxygen propellant as its primary means of getting into orbit. LOX is also widely used with RP-1 kerosene, a non-cryogenic hydrocarbon, such as in the rockets built for the Soviet space program by Sergei Korolev.",
"title": "Industrial applications"
},
{
"paragraph_id": 15,
"text": "Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989.",
"title": "Industrial applications"
},
{
"paragraph_id": 16,
"text": "Some applications of cryogenics:",
"title": "Other applications"
},
{
"paragraph_id": 17,
"text": "Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a mechanical cryocooler (which uses high-pressure helium lines). Gifford-McMahon cryocoolers, pulse tube cryocoolers and Stirling cryocoolers are in wide use with selection based on required base temperature and cooling capacity. The most recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect.",
"title": "Production"
},
{
"paragraph_id": 18,
"text": "There are various cryogenic detectors which are used to detect particles.",
"title": "Detectors"
},
{
"paragraph_id": 19,
"text": "For cryogenic temperature measurement down to 30 K, Pt100 sensors, a resistance temperature detector (RTD), are used. For temperatures lower than 30 K, it is necessary to use a silicon diode for accuracy.",
"title": "Detectors"
}
] | In physics, cryogenics is the production and behaviour of materials at very low temperatures. The 13th IIR International Congress of Refrigeration endorsed a universal definition of "cryogenics" and "cryogenic" by accepting a threshold of 120 K to distinguish these terms from conventional refrigeration. This is a logical dividing line, since the normal boiling points of the so-called permanent gases lie below 120 K, while the Freon refrigerants, hydrocarbons, and other common refrigerants have boiling points above 120 K. The U.S. National Institute of Standards and Technology considers the field of cryogenics as that involving temperatures below −153 °C. Discovery of superconducting materials with critical temperatures significantly above the boiling point of nitrogen has renewed interest in reliable, low-cost methods of producing high temperature cryogenic refrigeration. The term "high temperature cryogenic" describes temperatures ranging from above the boiling point of liquid nitrogen, −195.79 °C, up to −50 °C. The discovery of superconductivity is attributed to Heike Kamerlingh Onnes, who first liquefied helium on July 10, 1908, making temperatures of a few kelvins accessible in the laboratory. The first superconductive properties were observed in mercury at a temperature of 4.2 K in 1911. Cryogenicists use the Kelvin or Rankine temperature scale, both of which measure from absolute zero, rather than more usual scales such as Celsius which measures from the freezing point of water at sea level or Fahrenheit which measures from the freezing point of a particular brine solution at sea level. | 2001-11-19T21:18:06Z | 2023-12-14T18:50:23Z | [
"Template:Unreferenced section",
"Template:Webarchive",
"Template:Further",
"Template:Citation needed",
"Template:Citation",
"Template:Cite book",
"Template:ISBN",
"Template:Redirect",
"Template:More citations needed section",
"Template:Multiple image",
"Template:Reflist",
"Template:Cite web",
"Template:Cite journal",
"Template:Authority control",
"Template:For multi",
"Template:Convert",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Cryogenics |
7,179 | Cary Elwes | Ivan Simon Cary Elwes (/ˈɛlwɪs/; born 26 October 1962) is an English actor. He is known for his leading film roles as Westley in The Princess Bride (1987), Robin Hood in Robin Hood: Men in Tights (1993), and Dr. Lawrence Gordon in the Saw film series.
Elwes' other performances in films include Glory (1989), Hot Shots! (1991), Days of Thunder (1990), Bram Stoker's Dracula (1992), Twister (1996), Kiss the Girls (1997), Liar Liar (1997), Shadow of the Vampire (2000), The Cat's Meow (2001), Ella Enchanted (2004), No Strings Attached (2011), BlackBerry (2023), and Mission: Impossible – Dead Reckoning Part One (2023).
He has appeared on television in a number of series including The X-Files, Seinfeld, From the Earth to the Moon, Psych, and Life in Pieces. In 2019, he appeared in the Netflix drama series Stranger Things and the Amazon Prime comedy series The Marvelous Mrs. Maisel. Elwes has written a memoir of his time working on The Princess Bride called As You Wish, which was published in 2014.
Ivan Simon Cary Elwes was born on 26 October 1962 in Westminster, London. He is the youngest of three sons of portrait painter Dominic Elwes and interior designer and socialite Tessa Kennedy. Elwes is the brother of artist Damian Elwes and film producers Cassian Elwes and Milica Kastner. His stepfather, Elliott Kastner, was an American film producer and the first American to set up independent film production in the United Kingdom. His paternal grandfather was the portrait painter Simon Elwes, whose own father was the diplomat and tenor Gervase Elwes (1866–1921). Elwes has English, Irish, Scottish, Croatian-Jewish, and Serbian ancestry, the latter two from his maternal grandmother, Daška McLean, whose second husband, Billy McLean, was an operative for Special Operations Executive during World War II.
One of Elwes's relatives is the British miser John Elwes, who was the inspiration for Ebenezer Scrooge in A Christmas Carol (1843), having been referenced by Charles Dickens himself in chapter six of his last completed novel, Our Mutual Friend. Elwes himself played five roles in the 2009 film adaptation of A Christmas Carol. Through his maternal grandfather, Elwes is also related to Sir Alexander William "Blackie" Kennedy, one of the first photographers to document the archaeological site of Petra following the collapse of the Ottoman Empire.
Elwes was brought up as a Catholic and was an altar boy at Westminster Cathedral. His paternal relatives include such clerics as Dudley Charles Cary-Elwes (1868–1932), the Bishop of Northampton and Abbot Columba Cary-Elwes (Ampleforth Abbey, Saint Louis Abbey). He discussed this in an interview while he was filming the 2005 CBS television film Pope John Paul II, in which he played the young priest Karol Wojtyła.
Elwes's parents divorced when he was four years old. In 1975, when Elwes was 13, his father committed suicide. He was educated at Harrow School, and the London Academy of Music and Dramatic Art. In 1981, he moved to the United States to study acting at Sarah Lawrence College in Bronxville, New York. While living there, Elwes studied acting at both the Actors Studio and the Lee Strasberg Theatre and Film Institute under the tutelage of Al Pacino's mentor, Charlie Laughton (not to be confused with English actor Charles Laughton). As a teenager, he also worked as a production assistant on the films Absolution, Octopussy, and Superman, where he was assigned to Marlon Brando. When Elwes introduced himself to the actor, Brando insisted on calling him "Rocky" after Rocky Marciano.
Elwes made his acting debut in 1984 with Marek Kanievska's film Another Country, which was loosely based on the English boarding school exploits of British spies Burgess, Philby and MacLean. He played James Harcourt, a gay student. He went on to play Guilford Dudley in the British historical drama film Lady Jane, opposite Helena Bonham Carter. He was then cast as stable-boy-turned-swashbuckler Westley in Rob Reiner's fantasy-comedy The Princess Bride (1987), which was based on the novel of the same name by William Goldman. It was a modest box office success, but received critical acclaim. As a result of years of reviews, it earned a score of 97% on the review aggregation website Rotten Tomatoes. Since being released on home video and television, the film has become a cult classic.
Initially the studio didn't know how to market it. Was it an adventure? A fantasy? A comedy? A romance? A kids' movie? In the end they sold it as a kids' movie and it largely had to rely on word of mouth ... people tell me they still have their VHS copy that has been passed down from one generation to the next.
Elwes continued to work steadily, varying between dramatic roles, such as in the Oscar-winning Glory (1989), and comedic roles, as in Hot Shots! (1991). He played a rival driver to Tom Cruise in Days of Thunder (1990). In 1993, he starred as Robin Hood in Mel Brooks's comedy Robin Hood: Men in Tights. Elwes then appeared in supporting roles in such films as Francis Ford Coppola's adaptation of Bram Stoker's Dracula (1992), The Crush (1993), The Jungle Book (1994), Twister (1996), Liar Liar (1997), and Kiss the Girls. In 1999, he portrayed famed theatre and film producer John Houseman in Tim Robbins's ensemble film Cradle Will Rock, based on Orson Welles's 1937 staging of Marc Blitzstein's musical The Cradle Will Rock. Following that, he travelled to Luxembourg to work with John Malkovich and Willem Dafoe in Shadow of the Vampire.
Elwes made his first television appearance in 1996 as David Lookner on Seinfeld. Two years later he played astronaut Michael Collins in the Golden Globe Award-winning HBO miniseries From the Earth To the Moon. The following year Elwes was nominated for a Golden Satellite Award for Best Performance by an Actor in a Mini-Series or Motion Picture Made for Television for his portrayal of Colonel James Burton in The Pentagon Wars directed by Richard Benjamin. In 1999, he guest starred as Dr. John York in an episode of the television series The Outer Limits.
In 2001, he co-starred in Peter Bogdanovich's ensemble film The Cat's Meow portraying film mogul Thomas Ince, who died mysteriously while vacationing with William Randolph Hearst on his yacht. Shortly afterward he received another Golden Satellite Award nomination for his work on the ensemble NBC Television film Uprising opposite Jon Voight directed by Jon Avnet. Elwes had a recurring role in the final season (from 2001 to 2002) of Chris Carter's hit series The X-Files as FBI Assistant Director Brad Follmer. In 2003 Elwes portrayed Kerry Max Cook in the off-Broadway play The Exonerated in New York, directed by Bob Balaban (18–23 March 2003).
In 2004, Elwes starred in the horror–thriller Saw which, at a budget of a little over $1 million, grossed over $100 million worldwide. The same year he appeared in Ella Enchanted, this time as the villain, not the hero. Also in 2004, he portrayed serial killer Ted Bundy in the A&E Network film The Riverman, which became one of the highest rated original films in the network's history and garnered a prestigious BANFF Rockie Award nomination. The following year, Elwes played the young Karol Wojtyła in the CBS television film Pope John Paul II. The TV film was highly successful not only in North America but also in Europe, where it broke box office records in the late Pope's native Poland and became the first film ever to break $1 million in three days. He made an uncredited appearance as Sam Green, the man who introduced Andy Warhol to Edie Sedgwick, in the 2006 film Factory Girl. In 2007, he appeared in Garry Marshall's Georgia Rule opposite Jane Fonda.
In 2007, he made a guest appearance on the Law & Order: Special Victims Unit episode "Dependent" as a Mafia lawyer. In 2009, he played the role of Pierre Despereaux, an international art thief, in the fourth-season premiere of Psych. Also in 2009 Elwes joined the cast of Robert Zemeckis's motion capture adaptation of Charles Dickens' A Christmas Carol portraying five roles. That same year he was chosen by Steven Spielberg to appear in his motion capture adaptation of Belgian artist Hergé's popular comic strip The Adventures of Tintin: The Secret of the Unicorn. Elwes's voice-over work includes the narrator in James Patterson's audiobook The Jester, as well as characters in film and television animations such as Quest for Camelot, Pinky and The Brain, Batman Beyond, and the English versions of the Studio Ghibli films, Porco Rosso, Whisper of the Heart and The Cat Returns. For the 2004 video game The Bard's Tale, he served as screenwriter, improviser, and voice actor of the main character The Bard. In 2009, Elwes reunited with Jason Alexander for the Indian film, Delhi Safari. The following year Elwes portrayed the part of Gremlin Gus in Disney's video game, Epic Mickey 2: The Power of Two. In 2014, he appeared in Cosmos: A Spacetime Odyssey as the voice of scientists Edmond Halley and Robert Hooke.
In 2010, he returned to the Saw franchise in Saw 3D (2010), the seventh film in the series, as Dr. Lawrence Gordon. In 2010, he returned to Psych, reprising his role in the second half of the fifth season, again in the show's sixth season, and again in the show's eighth season premiere. In 2014, Elwes played Hugh Ashmeade, Director of the CIA, in the second season of the BYUtv series Granite Flats. In 2011, he was selected by Ivan Reitman to star alongside Natalie Portman in No Strings Attached. That same year, Elwes and Garry Marshall teamed up again in the ensemble romantic comedy New Year's Eve opposite Robert de Niro and Halle Berry.
In 2012, Elwes starred in the independent drama The Citizen, and the following year Elwes joined Selena Gomez for the comedy ensemble Behaving Badly, directed by Tim Garrick. In 2015, he completed Sugar Mountain, directed by Richard Gray; the drama We Don't Belong Here, opposite Anton Yelchin and Catherine Keener, directed by Peer Pedersen; and Being Charlie, which reunited Elwes with director Rob Reiner after 28 years and premiered at the Toronto International Film Festival. In 2016, Elwes starred opposite Penelope Cruz in Fernando Trueba's Spanish-language period film The Queen of Spain, a sequel to Trueba's 1998 drama The Girl of Your Dreams. This also re-united Elwes with his Princess Bride co-star, Mandy Patinkin.
In October 2014 Touchstone (Simon & Schuster) published Elwes's memoir of the making of The Princess Bride, entitled As You Wish: Inconceivable Tales from the Making of The Princess Bride, which he co-wrote with Joe Layden. The book featured never-before-told stories, exclusive behind-the-scenes photographs, and interviews with co-stars Robin Wright, Wallace Shawn, Billy Crystal, Christopher Guest, Fred Savage and Mandy Patinkin, as well as screenwriter William Goldman, producer Norman Lear, and director Rob Reiner. The book debuted on The New York Times Best Seller list.
In 2014, Elwes co-wrote the screenplay for a film entitled Elvis & Nixon, about the pair's famous meeting at the White House in 1970. The film, which starred Michael Shannon and Kevin Spacey, was bought by Amazon as their first theatrical feature and was released on 22 April 2016. In May 2015, Elwes was cast as Arthur Davenport, a shrewd and eccentric world-class collector of illegal art and antiquities in Crackle's first streaming network series drama, The Art of More, which explored the cutthroat world of premium auction houses. The series debuted on 19 November and was picked up for a second season.
In April 2018 Elwes portrayed Larry Kline, Mayor of Hawkins, for the third season of the Netflix series Stranger Things, which premiered in July 2019. He was nominated along with the cast for the Screen Actors Guild Award for Outstanding Performance by an Ensemble in a Drama Series. In May 2019, he joined the third season of the Amazon series The Marvelous Mrs. Maisel as Gavin Hawk.
Elwes met photographer Lisa Marie Kurbikoff in 1991 at a chili cook-off in Malibu, California, and they became engaged in 1997. They married in 2000 and have one daughter together.
In March 2021, Elwes posted on his social media accounts that his younger sister Milica had died after battling Stage 4 cancer for more than a year.
Elwes is known for his feud with Republican Texas Senator and Princess Bride fan Ted Cruz. According to the Hollywood Reporter, Elwes initiated the 2020 fundraiser that re-united many Princess Bride cast members to support Joe Biden in the battleground state of Wisconsin. The Princess Bride Reunion raised more than $4 million for Wisconsin Democrats.
In August 2005, Elwes filed a lawsuit against Evolution Entertainment, his management firm and producer of Saw. Elwes said he was promised a minimum of one percent of the producers' net profits of the film and did not receive the full amount. The case was settled out of court. Elwes would not return to the series until 2010, where he reprised his role in Saw 3D. | [
{
"paragraph_id": 0,
"text": "Ivan Simon Cary Elwes (/ˈɛlwɪs/; born 26 October 1962) is an English actor. He is known for his leading film roles as Westley in The Princess Bride (1987), Robin Hood in Robin Hood: Men in Tights (1993), and Dr. Lawrence Gordon in the Saw film series.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Elwes' other performances in films include Glory (1989), Hot Shots! (1991), Days of Thunder (1990), Bram Stoker's Dracula (1992), Twister (1996), Kiss the Girls (1997), Liar Liar (1997), Shadow of the Vampire (2000), The Cat's Meow (2001), Ella Enchanted (2004), No Strings Attached (2011), BlackBerry (2023), and Mission: Impossible – Dead Reckoning Part One (2023).",
"title": ""
},
{
"paragraph_id": 2,
"text": "He has appeared on television in a number of series including The X-Files, Seinfeld, From the Earth to the Moon, Psych, and Life in Pieces. In 2019, he appeared in the Netflix drama series Stranger Things and the Amazon Prime comedy series The Marvelous Mrs. Maisel. Elwes has written a memoir of his time working on The Princess Bride called As You Wish, which was published in 2014.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Ivan Simon Cary Elwes was born on 26 October 1962 in Westminster, London. He is the youngest of three sons of portrait painter Dominic Elwes and interior designer and socialite Tessa Kennedy. Elwes is the brother of artist Damian Elwes and film producers Cassian Elwes and Milica Kastner. His stepfather, Elliott Kastner, was an American film producer and the first American to set up independent film production in the United Kingdom. His paternal grandfather was the portrait painter Simon Elwes, whose own father was the diplomat and tenor Gervase Elwes (1866–1921). Elwes has English, Irish, Scottish, Croatian-Jewish, and Serbian ancestry, the latter two from his maternal grandmother, Daška McLean, whose second husband, Billy McLean, was an operative for Special Operations Executive during World War II.",
"title": "Early life and education"
},
{
"paragraph_id": 4,
"text": "One of Elwes's relatives is the British miser John Elwes, who was the inspiration for Ebenezer Scrooge in A Christmas Carol (1843), having been referenced by Charles Dickens himself in chapter six of his last completed novel, Our Mutual Friend. Elwes himself played five roles in the 2009 film adaptation of A Christmas Carol. Through his maternal grandfather, Elwes is also related to Sir Alexander William \"Blackie\" Kennedy, one of the first photographers to document the archaeological site of Petra following the collapse of the Ottoman Empire.",
"title": "Early life and education"
},
{
"paragraph_id": 5,
"text": "Elwes was brought up as a Catholic and was an altar boy at Westminster Cathedral. His paternal relatives include such clerics as Dudley Charles Cary-Elwes (1868–1932), the Bishop of Northampton and Abbot Columba Cary-Elwes (Ampleforth Abbey, Saint Louis Abbey). He discussed this in an interview while he was filming the 2005 CBS television film Pope John Paul II, in which he played the young priest Karol Wojtyła.",
"title": "Early life and education"
},
{
"paragraph_id": 6,
"text": "Elwes's parents divorced when he was four years old. In 1975, when Elwes was 13, his father committed suicide. He was educated at Harrow School, and the London Academy of Music and Dramatic Art. In 1981, he moved to the United States to study acting at Sarah Lawrence College in Bronxville, New York. While living there, Elwes studied acting at both the Actors Studio and the Lee Strasberg Theatre and Film Institute under the tutelage of Al Pacino's mentor, Charlie Laughton (not to be confused with English actor Charles Laughton). As a teenager, he also worked as a production assistant on the films Absolution, Octopussy, and Superman, where he was assigned to Marlon Brando. When Elwes introduced himself to the actor, Brando insisted on calling him \"Rocky\" after Rocky Marciano.",
"title": "Early life and education"
},
{
"paragraph_id": 7,
"text": "Elwes made his acting debut in 1984 with Marek Kanievska's film Another Country, which was loosely based on the English boarding school exploits of British spies Burgess, Philby and MacLean. He played James Harcourt, a gay student. He went on to play Guilford Dudley in the British historical drama film Lady Jane, opposite Helena Bonham Carter. He was then cast as stable-boy-turned-swashbuckler Westley in Rob Reiner's fantasy-comedy The Princess Bride (1987), which was based on the novel of the same name by William Goldman. It was a modest box office success, but received critical acclaim. As a result of years of reviews, it earned a score of 97% on the review aggregation website Rotten Tomatoes. Since being released on home video and television, the film has become a cult classic.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "Initially the studio didn't know how to market it. Was it an adventure? A fantasy? A comedy? A romance? A kids' movie? In the end they sold it as a kids' movie and it largely had to rely on word of mouth ... people tell me they still have their VHS copy that has been passed down from one generation to the next.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "Elwes continued to work steadily, varying between dramatic roles, such as in the Oscar-winning Glory (1989), and comedic roles, as in Hot Shots! (1991). He played a rival driver to Tom Cruise in Days of Thunder (1990). In 1993, he starred as Robin Hood in Mel Brooks's comedy Robin Hood: Men in Tights. Elwes then appeared in supporting roles in such films as Francis Ford Coppola's adaptation of Bram Stoker's Dracula (1992), The Crush (1993), The Jungle Book (1994), Twister (1996), Liar Liar (1997), and Kiss the Girls. In 1999, he portrayed famed theatre and film producer John Houseman for Tim Robbins in his ensemble film based on Orson Welles's musical, Cradle Will Rock. Following that, he travelled to Luxembourg to work with John Malkovich and Willem Dafoe in Shadow of the Vampire.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "Elwes made his first television appearance in 1996 as David Lookner on Seinfeld. Two years later he played astronaut Michael Collins in the Golden Globe Award-winning HBO miniseries From the Earth To the Moon. The following year Elwes was nominated for a Golden Satellite Award for Best Performance by an Actor in a Mini-Series or Motion Picture Made for Television for his portrayal of Colonel James Burton in The Pentagon Wars directed by Richard Benjamin. In 1999, he guest starred as Dr. John York in an episode of the television series The Outer Limits.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "In 2001, he co-starred in Peter Bogdanovich's ensemble film The Cat's Meow portraying film mogul Thomas Ince, who died mysteriously while vacationing with William Randolph Hearst on his yacht. Shortly afterward he received another Golden Satellite Award nomination for his work on the ensemble NBC Television film Uprising opposite Jon Voight directed by Jon Avnet. Elwes had a recurring role in the final season (from 2001 to 2002) of Chris Carter's hit series The X-Files as FBI Assistant Director Brad Follmer. In 2003 Elwes portrayed Kerry Max Cook in the off-Broadway play The Exonerated in New York, directed by Bob Balaban (18–23 March 2003).",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "In 2004, Elwes starred in the horror–thriller Saw which, at a budget of a little over $1 million, grossed over $100 million worldwide. The same year he appeared in Ella Enchanted, this time as the villain, not the hero. Also in 2004, he portrayed serial killer Ted Bundy in the A&E Network film The Riverman, which became one of the highest rated original films in the network's history and garnered a prestigious BANFF Rockie Award nomination. The following year, Elwes played the young Karol Wojtyła in the CBS television film Pope John Paul II. The TV film was highly successful not only in North America but also in Europe, where it broke box office records in the late Pope's native Poland and became the first film ever to break $1 million in three days. He made an uncredited appearance as Sam Green, the man who introduced Andy Warhol to Edie Sedgwick, in the 2006 film Factory Girl. In 2007, he appeared in Garry Marshall's Georgia Rule opposite Jane Fonda.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "In 2007, he made a guest appearance on the Law & Order: Special Victims Unit episode \"Dependent\" as a Mafia lawyer. In 2009, he played the role of Pierre Despereaux, an international art thief, in the fourth-season premiere of Psych. Also in 2009 Elwes joined the cast of Robert Zemeckis's motion capture adaptation of Charles Dickens' A Christmas Carol portraying five roles. That same year he was chosen by Steven Spielberg to appear in his motion capture adaptation of Belgian artist Hergé's popular comic strip The Adventures of Tintin: The Secret of the Unicorn. Elwes's voice-over work includes the narrator in James Patterson's audiobook The Jester, as well as characters in film and television animations such as Quest for Camelot, Pinky and The Brain, Batman Beyond, and the English versions of the Studio Ghibli films, Porco Rosso, Whisper of the Heart and The Cat Returns. For the 2004 video game The Bard's Tale, he served as screenwriter, improviser, and voice actor of the main character The Bard. In 2009, Elwes reunited with Jason Alexander for the Indian film, Delhi Safari. The following year Elwes portrayed the part of Gremlin Gus in Disney's video game, Epic Mickey 2: The Power of Two. In 2014, he appeared in Cosmos: A Spacetime Odyssey as the voice of scientists Edmond Halley and Robert Hooke.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "In 2010, he returned to the Saw franchise in Saw 3D (2010), the seventh film in the series, as Dr. Lawrence Gordon. In 2010, he returned to Psych, reprising his role in the second half of the fifth season, again in the show's sixth season, and again in the show's eighth season premiere. In 2014, Elwes played Hugh Ashmeade, Director of the CIA, in the second season of the BYUtv series Granite Flats. In 2011, he was selected by Ivan Reitman to star alongside Natalie Portman in No Strings Attached. That same year, Elwes and Garry Marshall teamed up again in the ensemble romantic comedy New Year's Eve opposite Robert de Niro and Halle Berry.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "In 2012, Elwes starred in the independent drama The Citizen. and the following year Elwes joined Selena Gomez for the comedy ensemble, Behaving Badly directed by Tim Garrick. In 2015, he completed Sugar Mountain directed by Richard Gray; the drama We Don't Belong Here, opposite Anton Yelchin and Catherine Keener directed by Peer Pedersen, and Being Charlie which reunited Elwes with director Rob Reiner after 28 years and premiered at the Toronto International Film Festival. In 2016, Elwes starred opposite Penelope Cruz in Fernando Trueba's Spanish-language period pic The Queen of Spain, a sequel to Trueba's 1998 drama The Girl of Your Dreams. This also re-united Elwes with his Princess Bride co-star, Mandy Patinkin.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "In October 2014 Touchstone (Simon & Schuster) published Elwes's memoir of the making of The Princess Bride, entitled As You Wish: Inconceivable Tales from the Making of The Princess Bride, which he co-wrote with Joe Layden. The book featured never-before-told stories, exclusive behind-the-scenes photographs, and interviews with co-stars Robin Wright, Wallace Shawn, Billy Crystal, Christopher Guest, Fred Savage and Mandy Patinkin, as well as screenwriter William Goldman, producer Norman Lear, and director Rob Reiner. The book debuted on The New York Times Best Seller list.",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "In 2014, Elwes co-wrote the screenplay for a film entitled Elvis & Nixon, about the pair's famous meeting at the White House in 1970. The film, which starred Michael Shannon and Kevin Spacey, was bought by Amazon as their first theatrical feature and was released on 22 April 2016. In May 2015, Elwes was cast as Arthur Davenport, a shrewd and eccentric world-class collector of illegal art and antiquities in Crackle's first streaming network series drama, The Art of More, which explored the cutthroat world of premium auction houses. The series debuted on 19 November and was picked up for a second season.",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "In April 2018 Elwes portrayed Larry Kline, Mayor of Hawkins, for the third season of the Netflix series Stranger Things, which premiered in July 2019. He was nominated along with the cast for the Screen Actors Guild Award for Outstanding Performance by an Ensemble in a Drama Series. In May 2019, he joined the third season of the Amazon series The Marvelous Mrs. Maisel as Gavin Hawk.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "Elwes met photographer Lisa Marie Kurbikoff in 1991 at a chili cook-off in Malibu, California, and they became engaged in 1997. They married in 2000 and have one daughter together.",
"title": "Personal life"
},
{
"paragraph_id": 20,
"text": "In March 2021, Elwes posted on his social media accounts that his younger sister Milica had died after battling Stage 4 cancer for more than a year.",
"title": "Personal life"
},
{
"paragraph_id": 21,
"text": "Elwes is known for his feud with Republican Texas Senator and Princess Bride fan Ted Cruz. According to the Hollywood Reporter, Elwes initiated the 2020 fundraiser that re-united many Princess Bride cast members to support Joe Biden in the battleground state of Wisconsin. The Princess Bride Reunion raised more than $4 million for Wisconsin Democrats.",
"title": "Personal life"
},
{
"paragraph_id": 22,
"text": "In August 2005, Elwes filed a lawsuit against Evolution Entertainment, his management firm and producer of Saw. Elwes said he was promised a minimum of one percent of the producers' net profits of the film and did not receive the full amount. The case was settled out of court. Elwes would not return to the series until 2010, where he reprised his role in Saw 3D.",
"title": "Personal life"
}
] | Ivan Simon Cary Elwes is an English actor. He is known for his leading film roles as Westley in The Princess Bride (1987), Robin Hood in Robin Hood: Men in Tights (1993), and Dr. Lawrence Gordon in the Saw film series. Elwes' other performances in films include Glory (1989), Hot Shots! (1991), Days of Thunder (1990), Bram Stoker's Dracula (1992), Twister (1996), Kiss the Girls (1997), Liar Liar (1997), Shadow of the Vampire (2000), The Cat's Meow (2001), Ella Enchanted (2004), No Strings Attached (2011), BlackBerry (2023), and Mission: Impossible – Dead Reckoning Part One (2023). He has appeared on television in a number of series including The X-Files, Seinfeld, From the Earth to the Moon, Psych, and Life in Pieces. In 2019, he appeared in the Netflix drama series Stranger Things and the Amazon Prime comedy series The Marvelous Mrs. Maisel. Elwes has written a memoir of his time working on The Princess Bride called As You Wish, which was published in 2014. | 2001-11-20T00:25:51Z | 2023-12-30T12:13:24Z | [
"Template:TableTBA",
"Template:Cite journal",
"Template:Cite web",
"Template:Commons category",
"Template:Cbignore",
"Template:Infobox person",
"Template:Dagger",
"Template:Sortname",
"Template:Cite news",
"Template:Short description",
"Template:Cn",
"Template:N/a",
"Template:Instagram",
"Template:Nom",
"Template:Cite book",
"Template:Reflist",
"Template:Cite video game",
"Template:Use dmy dates",
"Template:Use British English",
"Template:IPAc-en",
"Template:Blockquote",
"Template:IMDb name",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cary_Elwes |
7,180 | Chris Sarandon | Christopher Sarandon (/səˈrændən/; born July 24, 1942) is an American actor. He is well known for playing a variety of iconic characters, including Jerry Dandrige in Fright Night (1985), Prince Humperdinck in The Princess Bride (1987), Detective Mike Norris in Child's Play (1988), and Jack Skellington in The Nightmare Before Christmas (1993). He was nominated for the Academy Award for Best Supporting Actor for his performance as Leon Shermer in Dog Day Afternoon (1975).
Chris Sarandon was born and raised in Beckley, West Virginia, the son of Greek-American restaurateurs Chris and Cliffie (née Cardullias) Sarandon. His father, whose surname was originally "Sarondonethes", was born to Greek parents in Istanbul, Turkey.
Sarandon graduated from Woodrow Wilson High School in Beckley. He earned a degree in speech at West Virginia University. He earned his master's degree in theater from The Catholic University of America (CUA) in Washington, D.C.
After graduation, he toured with numerous improvisational companies and became much involved with regional theatre, making his professional debut in the play The Rose Tattoo during 1965. In the summer of 1968 he and his then-wife, Susan Sarandon, worked as actors at the Wayside Theatre in Middletown, Virginia. Later that year Sarandon moved to New York City, where he obtained his first television role as Dr. Tom Halverson for the series The Guiding Light (1973–1974). He appeared in the primetime television movies The Satan Murders (1974) and Thursday's Game before obtaining the role in Dog Day Afternoon (1975), a performance which earned him nominations for Best New Male Star of the Year at the Golden Globes and the Academy Award for Best Supporting Actor.
Sarandon appeared in the Broadway plays The Rothschilds and The Two Gentlemen of Verona, as well as making regular appearances at numerous Shakespeare and George Bernard Shaw festivals in the United States and Canada. He also had a series of television roles, some of which (such as A Tale of Two Cities in 1980) corresponded to his affinity for the classics. He had roles in the thriller movie Lipstick (1976) and as a demon in the movie The Sentinel (1977).
To avoid being typecast in villainous roles, Sarandon accepted various roles of other types during the years to come, portraying the title role of Christ in the made-for-television movie The Day Christ Died (1980). He received accolades for his portrayal of Sydney Carton in a TV-movie version of A Tale of Two Cities (1980), co-starred with Dennis Hopper in the 1983 movie The Osterman Weekend, which was based on the Robert Ludlum novel of the same name, and co-starred with Goldie Hawn in the movie Protocol (1984). These were followed by another mainstream success as the vampire-next-door in the horror movie Fright Night (1985). He starred in the 1986 TV movie Liberty, which addressed the making of New York City's Statue of Liberty.
One of his most endearing roles onscreen is that of Prince Humperdinck in Rob Reiner's 1987 movie The Princess Bride, though he has also had parts in many other successful films, including his lead turn in the original horror classic Child's Play (1988). In 1992, he played Joseph Curwen/Charles Dexter Ward in The Resurrected. He also played Jack Skellington, the main character of Tim Burton's animated Disney movie The Nightmare Before Christmas (1993), and has since reprised the role in other productions, including the Disney/Square video games Kingdom Hearts and Kingdom Hearts II and the Capcom sequel to the original movie, Oogie's Revenge. Sarandon also reprised his role as Jack Skellington for several Disneyland Halloween events and attractions, including Halloween Screams, the Frightfully Fun Parade, and the Haunted Mansion Holiday, a three-month overlay of the Haunted Mansion, where Jack and his friends take control of the mansion in an attempt to introduce Christmas, much as his character did in the movie.
Sarandon appeared on TV again with a recurring role as Dr. Burke on NBC's long-running medical drama ER.
In 1991 he performed on Broadway in the short-lived musical Nick & Nora (based on the movie The Thin Man) with Joanna Gleason, the daughter of Monty Hall. Sarandon married Gleason in 1994. They have appeared together in a number of movies, including Edie & Pen (1996), American Perfekt (1997), and Let the Devil Wear Black (1999). During the 2000s he made guest appearances in several TV series, notably as the Necromancer demon, Armand, in Charmed, and as superior court judge Barry Krumble for six episodes of Judging Amy.
In 2006 he played Signor Naccarelli in the six-time Tony award-winning Broadway musical play The Light in the Piazza at Lincoln Center. Most recently he appeared in Cyrano de Bergerac as Antoine de Guiche, with Kevin Kline, Jennifer Garner, and Daniel Sunjata.
In 2016 he performed in the Off-Broadway production of the Dave Malloy musical Preludes as Anton Chekhov, Tchaikovsky, Alexander Glazunov, Leo Tolstoy, Tsar Nicholas II, and The Master.
He is on the advisory board for the Greenbrier Valley Theatre in Lewisburg, West Virginia.
Sarandon has been married three times: he married actress Susan Sarandon in 1967. The two met while attending The Catholic University of America together in Washington, D.C. The marriage lasted for twelve years; the pair divorced in 1979. After divorcing from Susan, he married his second wife, fashion model Lisa Ann Cooper, in 1980. The couple had two daughters and one son: Stephanie (born 1982), Alexis (born 1984), and Michael (born 1988). After nine years, the marriage ended in divorce in 1989.
In 1994, he married his third wife, actress and singer Joanna Gleason. The couple met while performing in Broadway's short-lived 1991 musical Nick & Nora; they returned to the stage together in 1998's Thorn and Bloom. They have also collaborated on several films, including Road Ends, Edie & Pen, Let the Devil Wear Black, and American Perfekt.
Sarandon is a member of the Greek Orthodox Church. | [
{
"paragraph_id": 0,
"text": "Christopher Sarandon (/səˈrændən/; born July 24, 1942) is an American actor. He is well known for playing a variety of iconic characters, including Jerry Dandrige in Fright Night (1985), Prince Humperdinck in The Princess Bride (1987), Detective Mike Norris in Child's Play (1988), and Jack Skellington in The Nightmare Before Christmas (1993). He was nominated for the Academy Award for Best Supporting Actor for his performance as Leon Shermer in Dog Day Afternoon (1975).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Chris Sarandon was born and raised in Beckley, West Virginia, the son of Greek-American restaurateurs Chris and Cliffie (née Cardullias) Sarandon. His father, whose surname was originally \"Sarondonethes\", was born to Greek parents in Istanbul, Turkey.",
"title": "Early life"
},
{
"paragraph_id": 2,
"text": "Sarandon graduated from Woodrow Wilson High School in Beckley. He earned a degree in speech at West Virginia University. He earned his master's degree in theater from The Catholic University of America (CUA) in Washington, D.C.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "After graduation, he toured with numerous improvisational companies and became much involved with regional theatre, making his professional debut in the play The Rose Tattoo during 1965. In the summer of 1968 he and his then-wife, Susan Sarandon, worked as actors at the Wayside Theatre in Middletown, Virginia. Later that year Sarandon moved to New York City, where he obtained his first television role as Dr. Tom Halverson for the series The Guiding Light (1973–1974). He appeared in the primetime television movies The Satan Murders (1974) and Thursday's Game before obtaining the role in Dog Day Afternoon (1975), a performance which earned him nominations for Best New Male Star of the Year at the Golden Globes and the Academy Award for Best Supporting Actor.",
"title": "Career"
},
{
"paragraph_id": 4,
"text": "Sarandon appeared in the Broadway play The Rothschilds and The Two Gentlemen of Verona, as well making regular appearances at numerous Shakespeare and George Bernard Shaw festivals in the United States and Canada. He also had a series of television roles, some of which (such as A Tale of Two Cities in 1980) corresponded to his affinity for the classics. He also had roles in the thriller movie Lipstick (1976) and as a demon in the movie The Sentinel (1977).",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "To avoid being typecast in villainous roles, Sarandon accepted various roles of other types during the years to come, portraying the title role of Christ in the made-for-television movie The Day Christ Died (1980). He received accolades for his portrayal of Sydney Carton in a TV-movie version of A Tale of Two Cities (1980), co-starred with Dennis Hopper in the 1983 movie The Osterman Weekend, which was based on the Robert Ludlum novel of the same name, and co-starred with Goldie Hawn in the movie Protocol (1984). These were followed by another mainstream success as the vampire-next-door in the horror movie Fright Night (1985). He starred in the 1986 TV movie Liberty, which addressed the making of New York City's Statue of Liberty.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "One of his most endearing roles onscreen, is that of Prince Humperdinck in Rob Reiner's 1987 movie The Princess Bride, though he also has had supporting parts in many other successful films, including his lead turn in the original horror classic Child's Play (1988). In 1992, he played Joseph Curwen/Charles Dexter Ward in The Resurrected. He also played Jack Skellington, the main character of Tim Burton's animated Disney movie The Nightmare Before Christmas (1993), and has since reprised the role in other productions, including the Disney/Square video games Kingdom Hearts and Kingdom Hearts II and the Capcom sequel to the original movie, Oogie's Revenge. Sarandon also reprised his role as Jack Skellington for several Disneyland Halloween events and attractions including; Halloween Screams, the Frightfully Fun Parade, and the Haunted Mansion Holiday, a three-month overlay of the Haunted Mansion, where Jack and his friends take control of a mansion in an attempt to introduce Christmas, much as his character did in the movie.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "Sarandon appeared in TV again with a recurring role as Dr. Burke on NBC's long-running medical drama ER.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "In 1991 he performed on Broadway in the short-lived musical Nick & Nora (based on the movie The Thin Man) with Joanna Gleason, the daughter of Monty Hall. Sarandon married Gleason in 1994. They have appeared together in a number of movies, including Edie & Pen (1996), American Perfekt (1997), and Let the Devil Wear Black (1999). During the 2000s he made guest appearances in several TV series, notably as the Necromancer demon, Armand, in Charmed, and as superior court judge Barry Krumble for six episodes of Judging Amy.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "In 2006 he played Signor Naccarelli in the six-time Tony award-winning Broadway musical play The Light in the Piazza at Lincoln Center. Most recently he appeared in Cyrano de Bergerac as Antoine de Guiche, with Kevin Kline, Jennifer Garner, and Daniel Sunjata.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "In 2016 he performed in the Off-Broadway production of the Dave Malloy musical Preludes as Anton Chekhov, Tchaikovsky, Alexander Glazunov, Leo Tolstoy, Tsar Nicholas II, and The Master.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "He is on the advisory board for the Greenbrier Valley Theatre in Lewisburg, West Virginia.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "Sarandon has been married three times: he married actress Susan Sarandon in 1967. The two met while attending The Catholic University of America together in Washington, D.C. The marriage lasted for twelve years; the pair divorced in 1979. After divorcing from Susan, he married his second wife, fashion model Lisa Ann Cooper, in 1980. The couple had two daughters and one son: Stephanie (born 1982), Alexis (born 1984), and Michael (born 1988). After nine years, the marriage ended in divorce in 1989.",
"title": "Personal life"
},
{
"paragraph_id": 13,
"text": "In 1994, he married his third wife, actress and singer Joanna Gleason. The couple met while performing in Broadway's short-lived 1991 musical Nick & Nora; they returned to the stage together in 1998's Thorn and Bloom. They also collaborated in several films together, such as Road Ends, Edie & Pen, Let the Devil Wear Black, and American Perfekt.",
"title": "Personal life"
},
{
"paragraph_id": 14,
"text": "Sarandon is a member of the Greek Orthodox Church.",
"title": "Personal life"
}
] | Christopher Sarandon is an American actor. He is well known for playing a variety of iconic characters, including Jerry Dandrige in Fright Night (1985), Prince Humperdinck in The Princess Bride (1987), Detective Mike Norris in Child's Play (1988), and Jack Skellington in The Nightmare Before Christmas (1993). He was nominated for the Academy Award for Best Supporting Actor for his performance as Leon Shermer in Dog Day Afternoon (1975). | 2001-11-20T00:26:13Z | 2023-12-29T03:57:47Z | [
"Template:Short description",
"Template:Infobox person",
"Template:Reflist",
"Template:Cite web",
"Template:Commons category",
"Template:IPAc-en",
"Template:Nom",
"Template:Cite news",
"Template:Cite video game",
"Template:IMDb name",
"Template:IBDB name",
"Template:Iobdb name",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chris_Sarandon |
7,182 | Christopher Guest | Christopher Haden-Guest, 5th Baron Haden-Guest (born 5 February 1948), known professionally as Christopher Guest, is an American-British screenwriter and director. Guest has written, directed, and starred in his series of comedy films shot in mockumentary style. The series of films began with This Is Spinal Tap (which he did not direct) and continued with Waiting for Guffman, Best in Show, A Mighty Wind, For Your Consideration, and Mascots.
Guest holds a hereditary British peerage as the 5th Baron Haden-Guest, and has publicly expressed a desire to see the House of Lords reformed as a democratically elected chamber. Though he was initially active in the Lords, his career there was cut short by the House of Lords Act 1999, which removed the right of most hereditary peers to a seat in the parliament. When using his title, he is normally styled as Lord Haden-Guest. Guest is married to the actress Jamie Lee Curtis.
Guest was born in New York City, the son of Peter Haden-Guest, a British United Nations diplomat who later became the 4th Baron Haden-Guest, and his second wife, the former Jean Pauline Hindes, an American former vice president of casting at CBS. Guest's paternal grandfather, Leslie, Baron Haden-Guest, was a Labour Party politician, who was a convert to Judaism. Guest's paternal grandmother, a descendant of the Dutch Jewish Goldsmid family, was the daughter of Colonel Albert Goldsmid, a British officer who founded the Jewish Lads' and Girls' Brigade and the Maccabaeans. Guest's maternal grandparents were Jewish emigrants from Russia. Both of Guest's parents had become atheists, and Guest himself had no religious upbringing. In 1938, his uncle, David Guest, a lecturer and Communist Party member, was killed in the Spanish Civil War, fighting in the International Brigades.
Guest spent parts of his childhood in his father's native United Kingdom. He attended the High School of Music & Art (New York City), studying classical music (clarinet) at the Stockbridge School in the village of Interlaken in Stockbridge, Massachusetts. He later took up the mandolin, became interested in country music, and played guitar with Arlo Guthrie, a fellow student at Stockbridge School. Guest later began performing with bluegrass bands until he took up rock and roll. Guest went to Bard College for a year and then studied acting at New York University's Graduate Acting Program at the Tisch School of the Arts, graduating in 1971.
Guest began his career in theatre during the early 1970s with one of his earliest professional performances being the role of Norman in Michael Weller's Moonchildren for the play's American premiere at the Arena Stage in Washington, DC, in November 1971. Guest continued with the production when it moved to Broadway in 1972. The following year, he began making contributions to The National Lampoon Radio Hour for a variety of National Lampoon audio recordings. He both performed comic characters (Flash Bazbo—Space Explorer, Mr. Rogers, music critic Roger de Swans, and sleazy record company rep Ron Fields) and wrote, arranged, and performed numerous musical parodies (of Bob Dylan, James Taylor, and others). He was featured alongside Chevy Chase and John Belushi in the off-Broadway revue National Lampoon's Lemmings. Two of his earliest film roles were small parts as uniformed police officers in the 1972 film The Hot Rock and 1974's Death Wish.
Guest played a small role in the 1977 All in the Family episode "Mike and Gloria Meet", where in a flashback sequence Mike and Gloria recall their first blind date, set up by Michael's college buddy Jim (Guest), who dated Gloria's girlfriend Debbie (Priscilla Lopez).
Guest also had a small but important role in It Happened One Christmas, the 1977 gender-reversed TV remake of the Frank Capra classic It's a Wonderful Life, starring Marlo Thomas as Mary Bailey (the Jimmy Stewart role), with Cloris Leachman as Mary's guardian angel and Orson Welles as the villainous Mr. Potter. Guest played Mary's brother Harry, who returned from the Army in the final scene, speaking one of the last lines of the film: "A toast! To my big sister Mary, the richest person in town!"
Guest's biggest role of the first two decades of his career is likely that of Nigel Tufnel in the 1984 Rob Reiner film This Is Spinal Tap. Guest made his first appearance as Tufnel on the 1978 sketch comedy program The TV Show.
Along with Martin Short, Billy Crystal, and Harry Shearer, Guest was hired as a one-year-only cast member for the 1984–85 season on NBC's Saturday Night Live. Recurring characters on SNL played by Guest include Frankie, of Willie and Frankie (coworkers who recount in detail physically painful situations in which they have found themselves, remarking laconically "I hate when that happens"); Herb Minkman, a shady novelty toymaker with a brother named Al (played by Crystal); Rajeev Vindaloo, an eccentric foreign man in the same vein as Andy Kaufman's Latka character from Taxi; and Señor Cosa, a Spanish ventriloquist often seen on the recurring spoof of The Joe Franklin Show. He also experimented behind the camera with pre-filmed sketches, notably directing a documentary-style short starring Shearer and Short as synchronized swimmers. In another short film from SNL, Guest and Crystal appear in blackface as retired Negro league baseball players, "The Rooster and the King".
He appeared as Count Rugen (the "six-fingered man") in The Princess Bride. He had a cameo role as the first customer, a pedestrian, in the 1986 musical remake of The Little Shop of Horrors, which also featured Steve Martin. As a co-writer and director, Guest made the Hollywood satire The Big Picture.
Upon his father succeeding to the family peerage in 1987, he was known as "the Hon. Christopher Haden-Guest". This was his official style and name until he inherited the barony in 1996.
The experience of making This Is Spinal Tap directly informed the second phase of his career. Starting in 1996, Guest began writing, directing, and acting in his own series of substantially improvised films. Many of them are considered definitive examples of what came to be known as "mockumentaries"—not a term Guest appreciates.
Together, Guest, his frequent writing partner Eugene Levy, and a small band of actors have formed a loose repertory group, which appears in several films. The group includes Catherine O'Hara, Michael McKean, Parker Posey, Bob Balaban, Jane Lynch, John Michael Higgins, Harry Shearer, Jennifer Coolidge, Ed Begley, Jr., Jim Piddock and Fred Willard. Guest and Levy write backgrounds for each of the characters and notecards for each specific scene, outlining the plot, and then leave it up to the actors to improvise the dialogue, which is supposed to result in a much more natural conversation than scripted dialogue would. Typically, everyone who appears in these movies receives the same fee and the same portion of profits. The films performed in this manner, all written and directed by Guest, include Waiting for Guffman (1996), about a community theatre group; Best in Show (2000), about the dog show circuit; A Mighty Wind (2003), about folk singers; For Your Consideration (2006), about the hype surrounding Oscar season; and Mascots (2016), about a sports team mascot competition.
Guest had a guest voice-over role in the animated comedy series SpongeBob SquarePants as SpongeBob's cousin, Stanley.
Guest again collaborated with Reiner in A Few Good Men (1992), appearing as Dr. Stone. In the 2000s, Guest appeared in the 2005 biographical musical Mrs Henderson Presents and in the 2009 comedy The Invention of Lying.
He is also currently a member of the musical group The Beyman Bros, which he formed with childhood friend David Nichtern and Spinal Tap's current keyboardist C. J. Vanston. Their debut album Memories of Summer as a Child was released on January 20, 2009.
In 2010, the United States Census Bureau paid $2.5 million to have a television commercial directed by Guest shown during television coverage of Super Bowl XLIV.
Guest holds an honorary doctorate from and is a member of the board of trustees for Berklee College of Music in Boston.
In 2013, Guest co-wrote and produced the HBO series Family Tree in collaboration with Jim Piddock. A lighthearted story in the style he made famous in This Is Spinal Tap, the series follows its main character, Tom Chadwick, who inherits a box of curios from his great aunt, spurring interest in his ancestry.
On August 11, 2015, Netflix announced that Mascots, a film directed by Guest and co-written with Jim Piddock, about the competition for the World Mascot Association championship's Gold Fluffy Award, would debut in 2016.
Guest reprised his role as Count Tyrone Rugen in a Princess Bride cast reunion on September 13, 2020.
Guest became the 5th Baron Haden-Guest, of Great Saling, in the County of Essex, when his father died in 1996. He succeeded upon the ineligibility of his older half-brother, Anthony Haden-Guest, who was born before his parents married. According to an article in The Guardian, Guest attended the House of Lords regularly until the House of Lords Act 1999 barred most hereditary peers from their seats. In the article Guest remarked:
"There's no question that the old system was unfair. I mean, why should you be born to this? But now it's all just sheer cronyism. The prime minister can put in whoever he wants and bus them in to vote. The Upper House should be an elected body, it's that simple."
Guest married actress Jamie Lee Curtis in 1984 at the home of their mutual friend Rob Reiner. They have two daughters, through adoption. Guest was played by Seth Green in the film A Futile and Stupid Gesture.
Guest has worked multiple times with certain actors, notably with frequent writing partner Eugene Levy, who has appeared in five of his projects. Other repeat collaborators of Guest include Fred Willard (7 projects); Michael McKean, Bob Balaban, and Ed Begley, Jr. (6 projects each); Parker Posey, Jim Piddock, Michael Hitchcock and Harry Shearer (5 projects each); Catherine O'Hara, Larry Miller, John Michael Higgins, Jane Lynch, and Jennifer Coolidge (4 projects each). | [
{
"paragraph_id": 0,
"text": "Christopher Haden-Guest, 5th Baron Haden-Guest (born 5 February 1948), known professionally as Christopher Guest, is an American-British screenwriter and director. Guest has written, directed, and starred in his series of comedy films shot in mockumentary style. The series of films began with This Is Spinal Tap (which he did not direct) and continued with Waiting for Guffman, Best in Show, A Mighty Wind, For Your Consideration, and Mascots.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Guest holds a hereditary British peerage as the 5th Baron Haden-Guest, and has publicly expressed a desire to see the House of Lords reformed as a democratically elected chamber. Though he was initially active in the Lords, his career there was cut short by the House of Lords Act 1999, which removed the right of most hereditary peers to a seat in the parliament. When using his title, he is normally styled as Lord Haden-Guest. Guest is married to the actress Jamie Lee Curtis.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Guest was born in New York City, the son of Peter Haden-Guest, a British United Nations diplomat who later became the 4th Baron Haden-Guest, and his second wife, the former Jean Pauline Hindes, an American former vice president of casting at CBS. Guest's paternal grandfather, Leslie, Baron Haden-Guest, was a Labour Party politician, who was a convert to Judaism. Guest's paternal grandmother, a descendant of the Dutch Jewish Goldsmid family, was the daughter of Colonel Albert Goldsmid, a British officer who founded the Jewish Lads' and Girls' Brigade and the Maccabaeans. Guest's maternal grandparents were Jewish emigrants from Russia. Both of Guest's parents had become atheists, and Guest himself had no religious upbringing. In 1938, his uncle, David Guest, a lecturer and Communist Party member, was killed in the Spanish Civil War, fighting in the International Brigades.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "Guest spent parts of his childhood in his father's native United Kingdom. He attended the High School of Music & Art (New York City), studying classical music (clarinet) at the Stockbridge School in the village of Interlaken in Stockbridge, Massachusetts. He later took up the mandolin, became interested in country music, and played guitar with Arlo Guthrie, a fellow student at Stockbridge School. Guest later began performing with bluegrass bands until he took up rock and roll. Guest went to Bard College for a year and then studied acting at New York University's Graduate Acting Program at the Tisch School of the Arts, graduating in 1971.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Guest began his career in theatre during the early 1970s with one of his earliest professional performances being the role of Norman in Michael Weller's Moonchildren for the play's American premiere at the Arena Stage in Washington, DC, in November 1971. Guest continued with the production when it moved to Broadway in 1972. The following year, he began making contributions to The National Lampoon Radio Hour for a variety of National Lampoon audio recordings. He both performed comic characters (Flash Bazbo—Space Explorer, Mr. Rogers, music critic Roger de Swans, and sleazy record company rep Ron Fields) and wrote, arranged, and performed numerous musical parodies (of Bob Dylan, James Taylor, and others). He was featured alongside Chevy Chase and John Belushi in the off-Broadway revue National Lampoon's Lemmings. Two of his earliest film roles were small parts as uniformed police officers in the 1972 film The Hot Rock and 1974's Death Wish.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "Guest played a small role in the 1977 All in the Family episode \"Mike and Gloria Meet\", where in a flashback sequence Mike and Gloria recall their first blind date, set up by Michael's college buddy Jim (Guest), who dated Gloria's girlfriend Debbie (Priscilla Lopez).",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "Guest also had a small but important role in it Happened One Christmas, the 1977 gender-reversed TV remake of the Frank Capra classic it's a Wonderful Life, starring Marlo Thomas as Mary Bailey (the Jimmy Stewart role), with Cloris Leachman as Mary's guardian angel and Orson Welles as the villainous Mr. Potter. Guest played Mary's brother Harry, who returned from the Army in the final scene, speaking one of the last lines of the film: \"A toast! To my big sister Mary, the richest person in town!\"",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "Guest's biggest role of the first two decades of his career is likely that of Nigel Tufnel in the 1984 Rob Reiner film This Is Spinal Tap. Guest made his first appearance as Tufnel on the 1978 sketch comedy program The TV Show.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "Along with Martin Short, Billy Crystal, and Harry Shearer, Guest was hired as a one-year-only cast member for the 1984–85 season on NBC's Saturday Night Live. Recurring characters on SNL played by Guest include Frankie, of Willie and Frankie (coworkers who recount in detail physically painful situations in which they have found themselves, remarking laconically \"I hate when that happens\"); Herb Minkman, a shady novelty toymaker with a brother named Al (played by Crystal); Rajeev Vindaloo, an eccentric foreign man in the same vein as Andy Kaufman's Latka character from Taxi; and Señor Cosa, a Spanish ventriloquist often seen on the recurring spoof of The Joe Franklin Show. He also experimented behind the camera with pre-filmed sketches, notably directing a documentary-style short starring Shearer and Short as synchronized swimmers. In another short film from SNL, Guest and Crystal appear in blackface as retired Negro league baseball players, \"The Rooster and the King\".",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "He appeared as Count Rugen (the \"six-fingered man\") in The Princess Bride. He had a cameo role as the first customer, a pedestrian, in the 1986 musical remake of The Little Shop of Horrors, which also featured Steve Martin. As a co-writer and director, Guest made the Hollywood satire The Big Picture.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "Upon his father succeeding to the family peerage in 1987, he was known as \"the Hon. Christopher Haden-Guest\". This was his official style and name until he inherited the barony in 1996.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "The experience of making This is Spinal Tap directly informed the second phase of his career. Starting in 1996, Guest began writing, directing, and acting in his own series of substantially improvised films. Many of them are considered definitive examples of what came to be known as \"mockumentaries\"—not a term Guest appreciates.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "Together, Guest, his frequent writing partner Eugene Levy, and a small band of actors have formed a loose repertory group, which appears in several films. These include Catherine O'Hara, Michael McKean, Parker Posey, Bob Balaban, Jane Lynch, John Michael Higgins, Harry Shearer, Jennifer Coolidge, Ed Begley, Jr., Jim Piddock and Fred Willard. Guest and Levy write backgrounds for each of the characters and notecards for each specific scene, outlining the plot, and then leave it up to the actors to improvise the dialogue, which is supposed to result in a much more natural conversation than scripted dialogue would. Typically, everyone who appears in these movies receives the same fee and the same portion of profits. Among the films performed in this manner, which have been written and directed by Guest, include Waiting for Guffman (1996), about a community theatre group, Best in Show (2000), about the dog show circuit, A Mighty Wind (2003), about folk singers, For Your Consideration (2006), about the hype surrounding Oscar season, and Mascots (2016), about a sports team mascot competition.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "Guest had a guest voice-over role in the animated comedy series SpongeBob SquarePants as SpongeBob's cousin, Stanley.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "Guest again collaborated with Reiner in A Few Good Men (1992), appearing as Dr. Stone. In the 2000s, Guest appeared in the 2005 biographical musical Mrs Henderson Presents and in the 2009 comedy The Invention of Lying.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "He is also currently a member of the musical group The Beyman Bros, which he formed with childhood friend David Nichtern and Spinal Tap's current keyboardist C. J. Vanston. Their debut album Memories of Summer as a Child was released on January 20, 2009.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "In 2010, the United States Census Bureau paid $2.5 million to have a television commercial directed by Guest shown during television coverage of Super Bowl XLIV.",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "Guest holds an honorary doctorate from and is a member of the board of trustees for Berklee College of Music in Boston.",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "In 2013, Guest was the co-writer and producer of the HBO series Family Tree, in collaboration with Jim Piddock, a lighthearted story in the style he made famous in This is Spinal Tap, in which the main character, Tom Chadwick, inherits a box of curios from his great aunt, spurring interest in his ancestry.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "On August 11, 2015, Netflix announced that Mascots, a film directed by Guest and co-written with Jim Piddock, about the competition for the World Mascot Association championship's Gold Fluffy Award, would debut in 2016.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "Guest replayed his role as Count Tyrone Rugen in the Princess Bride Reunion on September 13, 2020.",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "Guest became the 5th Baron Haden-Guest, of Great Saling, in the County of Essex, when his father died in 1996. He succeeded upon the ineligibility of his older half-brother, Anthony Haden-Guest, who was born before his parents married. According to an article in The Guardian, Guest attended the House of Lords regularly until the House of Lords Act 1999 barred most hereditary peers from their seats. In the article Guest remarked:",
"title": "Family"
},
{
"paragraph_id": 22,
"text": "\"There's no question that the old system was unfair. I mean, why should you be born to this? But now it's all just sheer cronyism. The prime minister can put in whoever he wants and bus them in to vote. The Upper House should be an elected body, it's that simple.\"",
"title": "Family"
},
{
"paragraph_id": 23,
"text": "Guest married actress Jamie Lee Curtis in 1984 at the home of their mutual friend Rob Reiner. They have two daughters, through adoption. Guest was played by Seth Green in the film A Futile and Stupid Gesture.",
"title": "Family"
},
{
"paragraph_id": 24,
"text": "Guest has worked multiple times with certain actors, notably with frequent writing partner Eugene Levy, who has appeared in five of his projects. Other repeat collaborators of Guest include Fred Willard (7 projects); Michael McKean, Bob Balaban, and Ed Begley, Jr. (6 projects each); Parker Posey, Jim Piddock, Michael Hitchcock and Harry Shearer (5 projects each); Catherine O'Hara, Larry Miller, John Michael Higgins, Jane Lynch, and Jennifer Coolidge (4 projects each).",
"title": "Filmography"
}
] | Christopher Haden-Guest, 5th Baron Haden-Guest, known professionally as Christopher Guest, is an American-British screenwriter and director. Guest has written, directed, and starred in his series of comedy films shot in mockumentary style. The series of films began with This Is Spinal Tap and continued with Waiting for Guffman, Best in Show, A Mighty Wind, For Your Consideration, and Mascots. Guest holds a hereditary British peerage as the 5th Baron Haden-Guest, and has publicly expressed a desire to see the House of Lords reformed as a democratically elected chamber. Though he was initially active in the Lords, his career there was cut short by the House of Lords Act 1999, which removed the right of most hereditary peers to a seat in the parliament. When using his title, he is normally styled as Lord Haden-Guest. Guest is married to the actress Jamie Lee Curtis. | 2001-11-20T00:28:07Z | 2023-12-29T00:18:46Z | [
"Template:Short description",
"Template:S-media",
"Template:S-aft",
"Template:Cite book",
"Template:IBDB name",
"Template:S-end",
"Template:For",
"Template:Diagonal split header",
"Template:Cite magazine",
"Template:S-start",
"Template:Cite journal",
"Template:IMDb name",
"Template:Infobox officeholder",
"Template:Won",
"Template:Cite web",
"Template:Current barons in the Peerage of the United Kingdom",
"Template:Reflist",
"Template:Commons category",
"Template:S-bef",
"Template:Use mdy dates",
"Template:Fact",
"Template:Blockquote",
"Template:Yes",
"Template:No",
"Template:S-reg",
"Template:Navboxes",
"Template:Spinal Tap",
"Template:Authority control",
"Template:Cite news",
"Template:Iobdb name",
"Template:S-inc",
"Template:Christopher Guest",
"Template:Ya",
"Template:Nom",
"Template:S-ttl"
] | https://en.wikipedia.org/wiki/Christopher_Guest |
7,183 | Carol Kane | Carolyn Laurie Kane (born June 18, 1952) is an American actress. She gained recognition for her role in Hester Street (1975), for which she received an Academy Award nomination for Best Actress. She became known in the 1970s and 1980s in films such as Dog Day Afternoon (1975), Annie Hall (1977), The Princess Bride (1987), and Scrooged (1988).
Kane appeared on the television series Taxi in the early 1980s, as Simka Gravas, the wife of Latka, the character played by Andy Kaufman, winning two Emmy Awards for her work. She has played the character of Madame Morrible in the musical Wicked, both in touring productions and on Broadway from 2005 to 2014. From 2015 to 2020, she was a main cast member on the Netflix series Unbreakable Kimmy Schmidt, in which she played Lillian Kaushtupper. She currently plays the recurring role of Pelia in Star Trek: Strange New Worlds (2023–present).
Kane was born on June 18, 1952, in Cleveland, Ohio, the daughter of Joy, a jazz singer, teacher, dancer, and pianist, and architect Michael Kane. Her family is Jewish, and her grandparents emigrated from Russia, Austria, and Poland. Due to her father's occupation, Kane moved frequently as a child; she briefly lived in Paris at age 8, where she began learning to speak French. Additionally, she resided in Haiti at age 10, where she has recalled often feeling fearful due to extensive government surveillance under François "Papa Doc" Duvalier's rule. Her parents divorced when she was 12 years old.
She attended the Cherry Lawn School, a boarding school in Darien, Connecticut, until 1965. She studied theater at HB Studio and also went to the Professional Children's School in New York City. She became a member of both the Screen Actors Guild and the Actors' Equity Association at age 14. Kane made her professional theater debut in a 1966 production of The Prime of Miss Jean Brodie starring Tammy Grimes, her first job as a member of Actors' Equity.
Kane's on-screen career began while she was still a teenager, when she appeared in minor roles in films such as Desperate Characters and Mike Nichols's Carnal Knowledge in 1971, the latter of which led her to befriend lead actor Jack Nicholson. In 1972, she was cast in her first leading role in the Canadian production Wedding in White, where she played a teenage rape victim who is forced into marriage by her father. She also appeared as a sex worker in Hal Ashby's 1973 film The Last Detail, where she collaborated with Nicholson yet again.
In 1975, Kane was cast in Joan Micklin Silver's feature-length debut Hester Street, in which she played a Russian-Jewish immigrant who struggles with her husband to assimilate in early 20th-century New York. For her performance in the film, Kane garnered her sole Academy Award nomination for Best Actress at the 48th Academy Awards, and it remains her favorite of all her roles. Additionally, 1975 saw her appear as a bank teller in Sidney Lumet's crime drama Dog Day Afternoon, which received numerous Academy Award nominations in other categories that same year. This also marked her first on-screen collaboration with Al Pacino, whom she had known prior to the film thanks to their shared background in theater.
Despite this recognition, however, Kane has recounted waiting for approximately a year before being cast in her next role, which she has attributed to the trend of actors being typecast after receiving awards attention. Her return to the screen would come with Gene Wilder's 1977 comedy The World's Greatest Lover, which she has credited for identifying the comedic talents that would become her staple in later years. During the same year, she was cast in Woody Allen's romantic comedy Annie Hall, where she played Allison Portchnik, the first wife of Allen's character Alvy Singer. She also appeared in Ken Russell's film Valentino, which, like The World's Greatest Lover, takes inspiration from the silent film era, as it is a biographical drama loosely inspired by the life of Rudolph Valentino.
After this, Kane appeared in the horror films The Mafu Cage (1978) and When a Stranger Calls (1979); ironically, Kane herself is largely averse to horror, and she admits to being unable to watch the latter. In 1979, she also appeared in a cameo role in The Muppet Movie.
From 1980 to 1983, Kane portrayed Simka Dahblitz-Gravas, the wife of Andy Kaufman's character Latka Gravas, on the American television series Taxi. Kane has attributed the on-screen rapport she shared with Kaufman to their different work ethics: where she was trained in the theater and enjoyed rehearsal time, Kaufman was rooted more in stand-up comedy and did not care for rehearsals, a contrast that she believes enhanced their believability as a married couple. However, she maintains that she and Kaufman had a loving relationship on set, and she has spoken fondly of him in retrospective interviews. Kane received two Emmy Awards for her work on Taxi. Her role on the series has largely been credited as the beginning of her pivot towards more comedic roles, as she began to regularly appear in sitcoms and comedy films after the series ended.
In 1984, Kane appeared in episode 12, season 3 of Cheers as Amanda, an acquaintance of Diane Chambers from her time spent in a mental institution. She was also a regular on the 1986 series All Is Forgiven.
In 1987, Kane appeared in Ishtar, Elaine May's notorious box-office flop turned cult classic, playing the frustrated girlfriend of Dustin Hoffman's character. That year also saw her make one of her most recognizable film appearances in Rob Reiner's fantasy romance The Princess Bride, where she played a witch opposite Billy Crystal. In 1988, Kane appeared in the Cinemax Comedy Experiment Rap Master Ronnie: A Report Card alongside Jon Cryer and the Smothers Brothers. During the same year, she was also featured in the Bill Murray vehicle Scrooged, where she portrayed a contemporary version of the Ghost of Christmas Present, depicted in the film as a fairy. For this performance, Variety called her "unquestionably [the] pic's comic highlight". Additionally, she played a potential love interest for Steve Martin's character in the 1990 film My Blue Heaven.
Kane became a regular on the NBC series American Dreamer, which ran from 1990 to 1991. In 1993, she appeared in Addams Family Values where she replaced Judith Malina as Grandmama Addams; this role saw her reunite with her Taxi castmate Christopher Lloyd. She also guest starred on a 1994 episode of Seinfeld, as well as a 1996 episode of Ellen. In 1996, she was given a supporting role in the short-lived sitcom Pearl. From there, she continued to appear in a number of film roles throughout the 1990s and early 2000s, including The Pallbearer (1996), Office Killer (1997), Jawbreaker (1999), and My First Mister (2001). In 1998, she voiced Mother Duck in the American version of the animated television film The First Snow of Winter.
In 1999, she made a cameo in the Andy Kaufman biopic Man on the Moon as her Taxi character.
Kane is also known for her portrayal of the evil headmistress Madame Morrible in the Broadway musical Wicked, whom she played in various productions from 2005 to 2014. Kane made her Wicked debut on the 1st National Tour, playing the role from March 9 through December 19, 2005. She then reprised the role in the Broadway production from January 10 through November 12, 2006. She again played the role for the Los Angeles production which began performances on February 7, 2007. She left the production on December 30, 2007, and later returned on August 26, 2008, until the production closed on January 11, 2009.
In January 2009, she guest starred in the television series Two and a Half Men as the mother of Alan Harper's receptionist.
She then transferred with the Los Angeles company of Wicked to reprise her role once again, this time in the San Francisco production, which began performances January 27, 2009. She ended her limited engagement on March 22, 2009.
In March 2010, Kane appeared in the ABC series Ugly Betty as Justin Suarez's acting teacher.
Kane starred in the off-Broadway play Love, Loss, and What I Wore in February 2010. She made her West End debut in January 2011 in a major revival of Lillian Hellman's drama The Children's Hour at London's Comedy Theatre, where she starred alongside Keira Knightley, Elisabeth Moss and Ellen Burstyn. In May 2012, Kane appeared on Broadway as Betty Chumley in a revival of the play Harvey.
Kane returned to the Broadway company of Wicked from July 1, 2013, through February 22, 2014, a period that included the show's 10th anniversary.
In 2014, she was cast in a recurring role on the television series Gotham as Gertrude Kapelput, the mother of Oswald Cobblepot, also known as Penguin.
In 2015, Kane was cast in the recurring role of Lillian Kaushtupper, the landlord to the title character of the Netflix series Unbreakable Kimmy Schmidt. Kane joined the cast due in part to her admiration of showrunner Tina Fey, with whom she had previously wanted to collaborate on the NBC series 30 Rock. She was promoted to a series regular for the show's second season. Unbreakable Kimmy Schmidt ran for four seasons, making it one of Kane's longest television roles to date. She reprised the role in the "interactive" television special Kimmy vs the Reverend.
In 2018, Kane was cast in Jacques Audiard's Western film The Sisters Brothers. In 2019, she appeared in Jim Jarmusch's horror comedy The Dead Don't Die, marking another collaboration with Bill Murray. That same year, she was featured in the recurring role of Bianca Nova in season one of the HBO series Los Espookys, where she reunited with her Unbreakable Kimmy Schmidt castmate Fred Armisen.
In 2020, Kane was featured in the ensemble cast of the Amazon series Hunters, which also includes her longtime acquaintance Al Pacino. Additionally, during the same year, she participated in two cast reunion fundraisers, one with the cast of Taxi for the Actors Fund, the other with the cast of The Princess Bride for the Democratic Party of Wisconsin.
It was announced on Star Trek Day 2022 that Kane would join the cast of Star Trek: Strange New Worlds for season two as Chief Engineer Pelia. Prior to her casting, Kane had never seen an episode of the original Star Trek series, though she has said the show's writers thought this oversight improved her performance.
In 2023, Kane was announced as one of the leads in Nathan Silver's upcoming comedy film Between the Temples.
Kane was in a relationship with actor Woody Harrelson from 1986 to 1988. The two have remained friends since their break-up, and Harrelson was seen attending Kane's 60th birthday party in 2012.
She has never been married, nor has she had any children. Regarding the latter decision, she has said, "I never felt that I would be calm and stable enough to be the kind of mother I'd like to be. I don't think everyone randomly is mother material."
Kane is often noted for her high, breathy, slow voice, though her vocal timbre has grown raspier with age. Kane, who has often altered her voice to suit various roles, has confessed to disliking it, telling People magazine in 2020 that she wishes her voice was "deep and beautiful and sexy". | [
{
"paragraph_id": 0,
"text": "Carolyn Laurie Kane (born June 18, 1952) is an American actress. She gained recognition for her role in Hester Street (1975), for which she received an Academy Award nomination for Best Actress. She became known in the 1970s and 1980s in films such as Dog Day Afternoon (1975), Annie Hall (1977), The Princess Bride (1987), and Scrooged (1988).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Kane appeared on the television series Taxi in the early 1980s, as Simka Gravas, the wife of Latka, the character played by Andy Kaufman, winning two Emmy Awards for her work. She has played the character of Madame Morrible in the musical Wicked, both in touring productions and on Broadway from 2005 to 2014. From 2015 to 2020, she was a main cast member on the Netflix series Unbreakable Kimmy Schmidt, in which she played Lillian Kaushtupper. She currently plays the recurring role of Pelia in Star Trek: Strange New Worlds (2023–present).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Kane was born on June 18, 1952, in Cleveland, Ohio, the daughter of Joy, a jazz singer, teacher, dancer, and pianist, and architect Michael Kane. Her family is Jewish, and her grandparents emigrated from Russia, Austria, and Poland. Due to her father's occupation, Kane moved frequently as a child; she briefly lived in Paris at age 8, where she began learning to speak French. Additionally, she resided in Haiti at age 10, where she has recalled often feeling fearful due to extensive government surveillance under François \"Papa Doc\" Duvalier's rule. Her parents divorced when she was 12 years old.",
"title": "Early life"
},
{
"paragraph_id": 3,
"text": "She attended the Cherry Lawn School, a boarding school in Darien, Connecticut, until 1965. She studied theater at HB Studio and also went to the Professional Children's School in New York City. She became a member of both the Screen Actors Guild and the Actors' Equity Association at age 14. Kane made her professional theater debut in a 1966 production of The Prime of Miss Jean Brodie starring Tammy Grimes, her first job as a member of Actors' Equity.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "Kane's on-screen career began while she was still a teenager, when she appeared in minor roles in films such as Desperate Characters and Mike Nichols's Carnal Knowledge in 1971, the latter of which led her to befriend lead actor Jack Nicholson. In 1972, she was cast in her first leading role in the Canadian production Wedding in White, where she played a teenage rape victim who is forced into marriage by her father. She also appeared as a sex worker in Hal Ashby's 1973 film The Last Detail, where she collaborated with Nicholson yet again.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "In 1975, Kane was cast in Joan Micklin Silver's feature-length debut Hester Street, in which she played a Russian-Jewish immigrant who struggles with her husband to assimilate in early 20th-century New York. For her performance in the film, Kane garnered her sole Academy Award nomination for Best Actress at the 48th Academy Awards, and it remains her favorite of all her roles. Additionally, 1975 saw her appear as a bank teller in Sidney Lumet's crime drama Dog Day Afternoon, which received numerous Academy Award nominations in other categories that same year. This also marked her first on-screen collaboration with Al Pacino, whom she had known prior to the film thanks to their shared background in theater.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "Despite this recognition, however, Kane has recounted waiting for approximately a year before being cast in her next role, which she has attributed to the trend of actors being typecast after receiving awards attention. Her return to the screen would come with Gene Wilder's 1977 comedy The World's Greatest Lover, which she has credited for identifying the comedic talents that would become her staple in later years. During the same year, she was cast in Woody Allen's romantic comedy Annie Hall, where she played Allison Portchnik, the first wife of Allen's character Alvy Singer. She also appeared in Ken Russell's film Valentino, which, like The World's Greatest Lover, takes inspiration from the silent film era, as it is a biographical drama loosely inspired by the life of Rudolph Valentino.",
"title": "Career"
},
{
"paragraph_id": 7,
"text": "After this, Kane appeared in the horror films The Mafu Cage (1978) and When a Stranger Calls (1979); ironically, Kane herself is largely averse to horror, and she admits to being unable to watch the latter. In 1979, she also appeared in a cameo role in The Muppet Movie.",
"title": "Career"
},
{
"paragraph_id": 8,
"text": "From 1980 to 1983, Kane portrayed Simka Dahblitz-Gravas, the wife of Andy Kaufman's character Latka Gravas, on the American television series Taxi. Kane has attributed the on-screen rapport she shared with Kaufman to their different work ethics: where she was trained in the theater and enjoyed rehearsal time, Kaufman was rooted more in stand-up comedy and did not care for rehearsals, a contrast that she believes enhanced their believability as a married couple. However, she maintains that she and Kaufman had a loving relationship on set, and she has spoken fondly of him in retrospective interviews. Kane received two Emmy Awards for her work on Taxi. Her role on the series has largely been credited as the beginning of her pivot towards more comedic roles, as she began to regularly appear in sitcoms and comedy films after the series ended.",
"title": "Career"
},
{
"paragraph_id": 9,
"text": "In 1984, Kane appeared in episode 12, season 3 of Cheers as Amanda, an acquaintance of Diane Chambers from her time spent in a mental institution. She was also a regular on the 1986 series All Is Forgiven.",
"title": "Career"
},
{
"paragraph_id": 10,
"text": "In 1987, Kane appeared in Ishtar, Elaine May's notorious box-office flop turned cult classic, playing the frustrated girlfriend of Dustin Hoffman's character. That year also saw her make one of her most recognizable film appearances in Rob Reiner's fantasy romance The Princess Bride, where she played a witch opposite Billy Crystal. In 1988, Kane appeared in the Cinemax Comedy Experiment Rap Master Ronnie: A Report Card alongside Jon Cryer and the Smothers Brothers. During the same year, she was also featured in the Bill Murray vehicle Scrooged, where she portrayed a contemporary version of the Ghost of Christmas Present, depicted in the film as a fairy. For this performance, Variety called her \"unquestionably [the] pic's comic highlight\". Additionally, she played a potential love interest for Steve Martin's character in the 1990 film My Blue Heaven.",
"title": "Career"
},
{
"paragraph_id": 11,
"text": "Kane became a regular on the NBC series American Dreamer, which ran from 1990 to 1991. In 1993, she appeared in Addams Family Values where she replaced Judith Malina as Grandmama Addams; this role saw her reunite with her Taxi castmate Christopher Lloyd. She also guest starred on a 1994 episode of Seinfeld, as well as a 1996 episode of Ellen. In 1996, she was given a supporting role in the short-lived sitcom Pearl. From there, she continued to appear in a number of film roles throughout the 1990s and early 2000s, including The Pallbearer (1996), Office Killer (1997), Jawbreaker (1999), and My First Mister (2001). In 1998, she voiced Mother Duck in the American version of the animated television film The First Snow of Winter.",
"title": "Career"
},
{
"paragraph_id": 12,
"text": "In 1999, she made a cameo in the Andy Kaufman biopic Man on the Moon as her Taxi character.",
"title": "Career"
},
{
"paragraph_id": 13,
"text": "Kane is also known for her portrayal of the evil headmistress Madame Morrible in the Broadway musical Wicked, whom she played in various productions from 2005 to 2014. Kane made her Wicked debut on the 1st National Tour, playing the role from March 9 through December 19, 2005. She then reprised the role in the Broadway production from January 10 through November 12, 2006. She again played the role for the Los Angeles production which began performances on February 7, 2007. She left the production on December 30, 2007, and later returned on August 26, 2008, until the production closed on January 11, 2009.",
"title": "Career"
},
{
"paragraph_id": 14,
"text": "In January 2009, she guest starred in the television series Two and a Half Men as the mother of Alan Harper's receptionist.",
"title": "Career"
},
{
"paragraph_id": 15,
"text": "She then transferred with the Los Angeles company of Wicked to reprise her role once again, this time in the San Francisco production, which began performances January 27, 2009. She ended her limited engagement on March 22, 2009.",
"title": "Career"
},
{
"paragraph_id": 16,
"text": "In March 2010, Kane appeared in the ABC series Ugly Betty as Justin Suarez's acting teacher.",
"title": "Career"
},
{
"paragraph_id": 17,
"text": "Kane starred in the off-Broadway play Love, Loss, and What I Wore in February 2010. She made her West End debut in January 2011 in a major revival of Lillian Hellman's drama The Children's Hour at London's Comedy Theatre, where she starred alongside Keira Knightley, Elisabeth Moss and Ellen Burstyn. In May 2012, Kane appeared on Broadway as Betty Chumley in a revival of the play Harvey.",
"title": "Career"
},
{
"paragraph_id": 18,
"text": "Kane returned to the Broadway company of Wicked from July 1, 2013, through February 22, 2014, a period that included the show's 10th anniversary.",
"title": "Career"
},
{
"paragraph_id": 19,
"text": "In 2014, she was cast in a recurring role on the television series Gotham as Gertrude Kapelput, the mother of Oswald Cobblepot, also known as Penguin.",
"title": "Career"
},
{
"paragraph_id": 20,
"text": "In 2015, Kane was cast in the recurring role of Lillian Kaushtupper, the landlord to the title character of the Netflix series Unbreakable Kimmy Schmidt. Kane joined the cast due in part to her admiration of showrunner Tina Fey, with whom she had previously wanted to collaborate on the NBC series 30 Rock. She was promoted to a series regular for the show's second season. Unbreakable Kimmy Schmidt ran for four seasons, making it one of Kane's longest television roles to date. She reprised the role in the \"interactive\" television special Kimmy vs the Reverend.",
"title": "Career"
},
{
"paragraph_id": 21,
"text": "In 2018, Kane was cast in Jacques Audiard's Western film The Sisters Brothers. In 2019, she appeared in Jim Jarmusch's horror comedy The Dead Don't Die, marking another collaboration with Bill Murray. That same year, she was featured in the recurring role of Bianca Nova in season one of the HBO series Los Espookys, where she reunited with her Unbreakable Kimmy Schmidt castmate Fred Armisen.",
"title": "Career"
},
{
"paragraph_id": 22,
"text": "In 2020, Kane was featured in the ensemble cast of the Amazon series Hunters, which also includes her longtime acquaintance Al Pacino. Additionally, during the same year, she participated in two cast reunion fundraisers, one with the cast of Taxi for the Actors Fund, the other with the cast of The Princess Bride for the Democratic Party of Wisconsin.",
"title": "Career"
},
{
"paragraph_id": 23,
"text": "It was announced on Star Trek Day 2022 that Kane would join the cast of Star Trek: Strange New Worlds for season two as Chief Engineer Pelia. Prior to her casting, Kane had never seen an episode of the original Star Trek series, though she has said the show's writers thought this oversight improved her performance.",
"title": "Career"
},
{
"paragraph_id": 24,
"text": "In 2023, Kane was announced as one of the leads in Nathan Silver's upcoming comedy film Between the Temples.",
"title": "Career"
},
{
"paragraph_id": 25,
"text": "Kane was in a relationship with actor Woody Harrelson from 1986 to 1988. The two have remained friends since their break-up, and Harrelson was seen attending Kane's 60th birthday party in 2012.",
"title": "Personal life"
},
{
"paragraph_id": 26,
"text": "She has never been married, nor has she had any children. Regarding the latter decision, she has said, \"I never felt that I would be calm and stable enough to be the kind of mother I'd like to be. I don't think everyone randomly is mother material.\"",
"title": "Personal life"
},
{
"paragraph_id": 27,
"text": "Kane is often noted for her high, breathy, slow voice, though her vocal timbre has grown raspier with age. Kane, who has often altered her voice to suit various roles, has confessed to disliking it, telling People magazine in 2020 that she wishes her voice was \"deep and beautiful and sexy\".",
"title": "Personal life"
}
] | Carolyn Laurie Kane is an American actress. She gained recognition for her role in Hester Street (1975), for which she received an Academy Award nomination for Best Actress. She became known in the 1970s and 1980s in films such as Dog Day Afternoon (1975), Annie Hall (1977), The Princess Bride (1987), and Scrooged (1988). Kane appeared on the television series Taxi in the early 1980s, as Simka Gravas, the wife of Latka, the character played by Andy Kaufman, winning two Emmy Awards for her work. She has played the character of Madame Morrible in the musical Wicked, both in touring productions and on Broadway from 2005 to 2014. From 2015 to 2020, she was a main cast member on the Netflix series Unbreakable Kimmy Schmidt, in which she played Lillian Kaushtupper. She currently plays the recurring role of Pelia in Star Trek: Strange New Worlds (2023–present). | 2001-11-20T00:29:11Z | 2023-12-20T18:05:14Z | [
"Template:Cite book",
"Template:Iobdb name",
"Template:For",
"Template:Lang",
"Template:Efn",
"Template:Reflist",
"Template:Cite web",
"Template:Cite AV media",
"Template:IBDB name",
"Template:Authority control",
"Template:Use American English",
"Template:Infobox person",
"Template:Ndash",
"Template:Nom",
"Template:Cite news",
"Template:Webarchive",
"Template:Short description",
"Template:Use mdy dates",
"Template:Notelist",
"Template:IMDb name",
"Template:Won",
"Template:Citation",
"Template:Navboxes"
] | https://en.wikipedia.org/wiki/Carol_Kane |
7,184 | C*-algebra | In mathematics, specifically in functional analysis, a C*-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties: A is a topologically closed set in the norm topology of operators, and A is closed under the operation of taking adjoints of operators.
Another important class of non-Hilbert C*-algebras includes the algebra C₀(X) of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space.
C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras which are now known as von Neumann algebras.
Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space.
C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras.
We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark.
A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map x ↦ x* for x ∈ A with the following properties: the map is an involution, (x*)* = x; it is additive, (x + y)* = x* + y*; it is conjugate-linear, (λx)* = λ̄x* for every complex λ; it is antimultiplicative, (xy)* = y*x*; and it satisfies ‖x*x‖ = ‖x‖ ‖x*‖ for all x ∈ A.
Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to:
‖xx*‖ = ‖x‖²,
which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section below.
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure: ‖x‖² = ‖x*x‖ = sup{ |λ| : λ ∈ σ(x*x) }, where σ(x*x) denotes the spectrum of x*x.
A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if π(xy) = π(x)π(y) and π(x*) = π(x)* for all x, y ∈ A.
In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity.
A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.
The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition ‖x*x‖ = ‖x‖² for all x in the algebra.
This condition automatically implies that the *-involution is isometric, that is, ‖x‖ = ‖x*‖. Hence, ‖xx*‖ = ‖x‖ ‖x*‖, and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition ‖x‖ = ‖x*‖. For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'.
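The first of these implications admits a one-line argument; the following is a sketch added here for illustration, using only the submultiplicativity of the norm in a Banach algebra, the identity x** = x, and the B*-condition (the case x = 0 is trivial):

\[
\|x\|^{2} = \|x^{*}x\| \le \|x^{*}\|\,\|x\| \;\Rightarrow\; \|x\| \le \|x^{*}\|,
\qquad
\|x^{*}\|^{2} = \|x^{**}x^{*}\| = \|xx^{*}\| \le \|x\|\,\|x^{*}\| \;\Rightarrow\; \|x^{*}\| \le \|x\|,
\]

so ‖x‖ = ‖x*‖.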
The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a "uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space".
C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism.
Self-adjoint elements are those of the form x = x*. The set of elements of a C*-algebra A of the form x*x forms a closed convex cone. This cone is identical to the set of elements of the form xx*. Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of ℝ).
The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted ≥. In this ordering, a self-adjoint element x ∈ A satisfies x ≥ 0 if and only if the spectrum of x is non-negative, if and only if x = s*s for some s ∈ A. Two self-adjoint elements x and y of A satisfy x ≥ y if x − y ≥ 0.
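In the matrix C*-algebra M(n, C) (discussed further below) this order is the familiar positive-semidefinite order, and the equivalences above can be checked numerically. The following is a minimal NumPy sketch for illustration; the helper names are ours and not from any C*-algebra library.

import numpy as np

def is_self_adjoint(x, tol=1e-10):
    # x = x* (conjugate transpose), up to numerical tolerance
    return np.allclose(x, x.conj().T, atol=tol)

def is_non_negative(x, tol=1e-10):
    # a self-adjoint matrix is >= 0 exactly when its spectrum is non-negative
    return is_self_adjoint(x, tol) and np.all(np.linalg.eigvalsh(x) >= -tol)

rng = np.random.default_rng(0)
s = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

x = s.conj().T @ s                 # an element of the form s*s ...
print(is_non_negative(x))          # ... lies in the positive cone: True
print(np.linalg.eigvalsh(x))       # its spectrum is non-negative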
This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction.
Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that x eλ → x for every x ∈ A, and 0 ≤ eλ ≤ eμ ≤ 1 whenever λ ≤ μ.
Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra.
Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra.
The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space Cⁿ, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type:
Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum A = ⊕_{e ∈ min A} Ae,
where min A is the set of minimal nonzero self-adjoint central projections of A.
Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A.
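To make the matrix-algebra example above concrete, here is a short NumPy sketch added for illustration (not part of any library): it checks the C*-identity ||x*x|| = ||x||² for the operator norm, and realises an element of the finite direct sum M(2, C) ⊕ M(3, C) as a block-diagonal matrix, whose operator norm is the maximum of the block norms.

import numpy as np

rng = np.random.default_rng(1)
op_norm = lambda a: np.linalg.norm(a, 2)        # operator norm = largest singular value

# C*-identity in M(4, C): ||x* x|| = ||x||^2
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
print(np.isclose(op_norm(x.conj().T @ x), op_norm(x) ** 2))     # True

# An element of M(2, C) ⊕ M(3, C), realised as a block-diagonal 5 x 5 matrix
a = rng.standard_normal((2, 2))
b = rng.standard_normal((3, 3))
d = np.zeros((5, 5))
d[:2, :2] = a
d[2:, 2:] = b
print(np.isclose(op_norm(d), max(op_norm(a), op_norm(b))))      # True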
A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science.
An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras.
The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem.
Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra.
Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras:
Theorem. If A is a C*-subalgebra of K(H), then there exist Hilbert spaces {Hi}i∈I such that A ≅ ⊕_{i∈I} K(Hi),
where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0.
Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences ℓ²; we may assume that H = ℓ². For each natural number n let Hn be the subspace of sequences of ℓ² which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H).
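A finite truncation of this construction can be simulated numerically. The sketch below is an illustration added here (with an arbitrarily chosen compact diagonal operator, and a finite matrix standing in for ℓ²); it shows ||en k − k|| shrinking as n grows.

import numpy as np

N = 200                                    # finite truncation standing in for l^2
k = np.diag(1.0 / np.arange(1, N + 1))     # a compact (diagonal) operator: entries -> 0

def e(n):
    # orthogonal projection onto the first n coordinates
    p = np.zeros((N, N))
    p[:n, :n] = np.eye(n)
    return p

for n in (5, 20, 80):
    # operator norm of e_n k - k; for this diagonal k it equals 1/(n + 1)
    print(n, np.linalg.norm(e(n) @ k - k, 2))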
K(H) is a two-sided closed ideal of B(H). For separable infinite-dimensional Hilbert spaces, it is the unique nontrivial closed two-sided ideal. The quotient of B(H) by K(H) is the Calkin algebra.
Let X be a locally compact Hausdorff space. The space C₀(X) of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) forms a commutative C*-algebra C₀(X) under pointwise multiplication and addition. The involution is pointwise conjugation. C₀(X) has a multiplicative unit element if and only if X is compact. As does any C*-algebra, C₀(X) has an approximate identity. In the case of C₀(X) this is immediate: consider the directed set of compact subsets of X, and for each compact K let fK be a function of compact support which is identically 1 on K. Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such family of functions {fK} is an approximate identity.
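The two features just described (the sup-norm C*-identity under pointwise conjugation, and the approximate identity {fK}) can be illustrated numerically for X = ℝ. The sketch below is added here for illustration; the sample function and the piecewise-linear bumps are arbitrary choices, and the sup norm is approximated on a grid.

import numpy as np

x = np.linspace(-60.0, 60.0, 24001)           # a grid standing in for X = R (locally compact)
f = np.exp(-np.abs(x)) * np.exp(1j * x)       # a continuous function vanishing at infinity
sup = lambda g: np.max(np.abs(g))             # sup norm, approximated on the grid

# C*-identity in C_0(X): ||conj(f) * f|| = ||f||^2   (involution = pointwise conjugation)
print(np.isclose(sup(np.conj(f) * f), sup(f) ** 2))     # True

def f_K(n):
    # a continuous bump: identically 1 on the compact set K = [-n, n], 0 outside [-(n+1), n+1]
    return np.clip(n + 1 - np.abs(x), 0.0, 1.0)

for n in (2, 5, 10):
    print(n, sup(f_K(n) * f - f))             # tends to 0 as K grows: an approximate identity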
The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra C₀(X), where X is the space of characters equipped with the weak* topology. Furthermore, if C₀(X) is isomorphic to C₀(Y) as C*-algebras, it follows that X and Y are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs.
Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism π′ : A → B factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A.
Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra.
Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology.
The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it.
A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A)′′ (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A)′′ is a factor.
A locally compact group is said to be of type I if and only if its group C*-algebra is type I.
However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties.
In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).
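In the finite-dimensional case A = M(n, C), every state is of the form φ(x) = tr(ρx) for a unique density matrix ρ (a positive matrix of trace one), and φ(x) is the usual quantum-mechanical expectation value. The NumPy sketch below is an illustration of these defining properties added here; the particular ρ and observable are randomly generated.

import numpy as np

rng = np.random.default_rng(2)
n = 3

# A density matrix rho >= 0 with tr(rho) = 1 defines a state phi(x) = tr(rho x) on M(n, C).
s = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = s.conj().T @ s
rho = rho / np.trace(rho).real

phi = lambda x: np.trace(rho @ x)

observable = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
observable = (observable + observable.conj().T) / 2      # self-adjoint, x = x*

u = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(np.isclose(phi(np.eye(n)), 1.0))        # phi(1) = 1
print(phi(u.conj().T @ u).real >= -1e-12)     # positivity: phi(u*u) >= 0
print(np.isclose(phi(observable).imag, 0.0))  # the expected value of an observable is real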
This C*-algebra approach is used in the Haag-Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra. | [
{
"paragraph_id": 0,
"text": "In mathematics, specifically in functional analysis, a C-algebra (pronounced \"C-star\") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties:",
"title": ""
},
{
"paragraph_id": 1,
"text": "Another important class of non-Hilbert C*-algebras includes the algebra C 0 ( X ) {\\displaystyle C_{0}(X)} of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space.",
"title": ""
},
{
"paragraph_id": 2,
"text": "C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras which are now known as von Neumann algebras.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space.",
"title": ""
},
{
"paragraph_id": 4,
"text": "C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent of which classification is possible, for separable simple nuclear C*-algebras.",
"title": ""
},
{
"paragraph_id": 5,
"text": "We begin with the abstract characterization of C*-algebras given in the 1943 paper by Gelfand and Naimark.",
"title": "Abstract characterization"
},
{
"paragraph_id": 6,
"text": "A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map x ↦ x ∗ {\\textstyle x\\mapsto x^{*}} for x ∈ A {\\textstyle x\\in A} with the following properties:",
"title": "Abstract characterization"
},
{
"paragraph_id": 7,
"text": "Remark. The first four identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to:",
"title": "Abstract characterization"
},
{
"paragraph_id": 8,
"text": "‖ x x ∗ ‖ = ‖ x ‖ 2 , {\\displaystyle \\|xx^{*}\\|=\\|x\\|^{2},}",
"title": "Abstract characterization"
},
{
"paragraph_id": 9,
"text": "which is sometimes called the B*-identity. For history behind the names C*- and B*-algebras, see the history section below.",
"title": "Abstract characterization"
},
{
"paragraph_id": 10,
"text": "The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:",
"title": "Abstract characterization"
},
{
"paragraph_id": 11,
"text": "A bounded linear map, π : A → B, between C*-algebras A and B is called a *-homomorphism if",
"title": "Abstract characterization"
},
{
"paragraph_id": 12,
"text": "In the case of C*-algebras, any *-homomorphism π between C*-algebras is contractive, i.e. bounded with norm ≤ 1. Furthermore, an injective *-homomorphism between C*-algebras is isometric. These are consequences of the C*-identity.",
"title": "Abstract characterization"
},
{
"paragraph_id": 13,
"text": "A bijective *-homomorphism π is called a C*-isomorphism, in which case A and B are said to be isomorphic.",
"title": "Abstract characterization"
},
{
"paragraph_id": 14,
"text": "The term B*-algebra was introduced by C. E. Rickart in 1946 to describe Banach *-algebras that satisfy the condition:",
"title": "Some history: B*-algebras and C*-algebras"
},
{
"paragraph_id": 15,
"text": "This condition automatically implies that the *-involution is isometric, that is, ‖ x ‖ = ‖ x ∗ ‖ {\\displaystyle \\lVert x\\rVert =\\lVert x^{*}\\rVert } . Hence, ‖ x x ∗ ‖ = ‖ x ‖ ‖ x ∗ ‖ {\\displaystyle \\lVert xx^{*}\\rVert =\\lVert x\\rVert \\lVert x^{*}\\rVert } , and therefore, a B*-algebra is also a C*-algebra. Conversely, the C*-condition implies the B*-condition. This is nontrivial, and can be proved without using the condition ‖ x ‖ = ‖ x ∗ ‖ {\\displaystyle \\lVert x\\rVert =\\lVert x^{*}\\rVert } . For these reasons, the term B*-algebra is rarely used in current terminology, and has been replaced by the term 'C*-algebra'.",
"title": "Some history: B*-algebras and C*-algebras"
},
{
"paragraph_id": 16,
"text": "The term C*-algebra was introduced by I. E. Segal in 1947 to describe norm-closed subalgebras of B(H), namely, the space of bounded operators on some Hilbert space H. 'C' stood for 'closed'. In his paper Segal defines a C*-algebra as a \"uniformly closed, self-adjoint algebra of bounded operators on a Hilbert space\".",
"title": "Some history: B*-algebras and C*-algebras"
},
{
"paragraph_id": 17,
"text": "C*-algebras have a large number of properties that are technically convenient. Some of these properties can be established by using the continuous functional calculus or by reduction to commutative C*-algebras. In the latter case, we can use the fact that the structure of these is completely determined by the Gelfand isomorphism.",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 18,
"text": "Self-adjoint elements are those of the form x = x ∗ {\\displaystyle x=x^{*}} . The set of elements of a C*-algebra A of the form x ∗ x {\\displaystyle x^{*}x} forms a closed convex cone. This cone is identical to the elements of the form x x ∗ {\\displaystyle xx^{*}} . Elements of this cone are called non-negative (or sometimes positive, even though this terminology conflicts with its use for elements of ℝ)",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 19,
"text": "The set of self-adjoint elements of a C*-algebra A naturally has the structure of a partially ordered vector space; the ordering is usually denoted ≥ {\\displaystyle \\geq } . In this ordering, a self-adjoint element x ∈ A {\\displaystyle x\\in A} satisfies x ≥ 0 {\\displaystyle x\\geq 0} if and only if the spectrum of x {\\displaystyle x} is non-negative, if and only if x = s ∗ s {\\displaystyle x=s^{*}s} for some s ∈ A {\\displaystyle s\\in A} . Two self-adjoint elements x {\\displaystyle x} and y {\\displaystyle y} of A satisfy x ≥ y {\\displaystyle x\\geq y} if x − y ≥ 0 {\\displaystyle x-y\\geq 0} .",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 20,
"text": "This partially ordered subspace allows the definition of a positive linear functional on a C*-algebra, which in turn is used to define the states of a C*-algebra, which in turn can be used to construct the spectrum of a C*-algebra using the GNS construction.",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 21,
"text": "Any C*-algebra A has an approximate identity. In fact, there is a directed family {eλ}λ∈I of self-adjoint elements of A such that",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 22,
"text": "Using approximate identities, one can show that the algebraic quotient of a C*-algebra by a closed proper two-sided ideal, with the natural norm, is a C*-algebra.",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 23,
"text": "Similarly, a closed two-sided ideal of a C*-algebra is itself a C*-algebra.",
"title": "Structure of C*-algebras"
},
{
"paragraph_id": 24,
"text": "The algebra M(n, C) of n × n matrices over C becomes a C*-algebra if we consider matrices as operators on the Euclidean space, C, and use the operator norm ||·|| on matrices. The involution is given by the conjugate transpose. More generally, one can consider finite direct sums of matrix algebras. In fact, all C*-algebras that are finite dimensional as vector spaces are of this form, up to isomorphism. The self-adjoint requirement means finite-dimensional C*-algebras are semisimple, from which fact one can deduce the following theorem of Artin–Wedderburn type:",
"title": "Examples"
},
{
"paragraph_id": 25,
"text": "Theorem. A finite-dimensional C*-algebra, A, is canonically isomorphic to a finite direct sum",
"title": "Examples"
},
{
"paragraph_id": 26,
"text": "where min A is the set of minimal nonzero self-adjoint central projections of A.",
"title": "Examples"
},
{
"paragraph_id": 27,
"text": "Each C*-algebra, Ae, is isomorphic (in a noncanonical way) to the full matrix algebra M(dim(e), C). The finite family indexed on min A given by {dim(e)}e is called the dimension vector of A. This vector uniquely determines the isomorphism class of a finite-dimensional C*-algebra. In the language of K-theory, this vector is the positive cone of the K0 group of A.",
"title": "Examples"
},
{
"paragraph_id": 28,
"text": "A †-algebra (or, more explicitly, a †-closed algebra) is the name occasionally used in physics for a finite-dimensional C*-algebra. The dagger, †, is used in the name because physicists typically use the symbol to denote a Hermitian adjoint, and are often not worried about the subtleties associated with an infinite number of dimensions. (Mathematicians usually use the asterisk, *, to denote the Hermitian adjoint.) †-algebras feature prominently in quantum mechanics, and especially quantum information science.",
"title": "Examples"
},
{
"paragraph_id": 29,
"text": "An immediate generalization of finite dimensional C*-algebras are the approximately finite dimensional C*-algebras.",
"title": "Examples"
},
{
"paragraph_id": 30,
"text": "The prototypical example of a C*-algebra is the algebra B(H) of bounded (equivalently continuous) linear operators defined on a complex Hilbert space H; here x* denotes the adjoint operator of the operator x : H → H. In fact, every C*-algebra, A, is *-isomorphic to a norm-closed adjoint closed subalgebra of B(H) for a suitable Hilbert space, H; this is the content of the Gelfand–Naimark theorem.",
"title": "Examples"
},
{
"paragraph_id": 31,
"text": "Let H be a separable infinite-dimensional Hilbert space. The algebra K(H) of compact operators on H is a norm closed subalgebra of B(H). It is also closed under involution; hence it is a C*-algebra.",
"title": "Examples"
},
{
"paragraph_id": 32,
"text": "Concrete C*-algebras of compact operators admit a characterization similar to Wedderburn's theorem for finite dimensional C*-algebras:",
"title": "Examples"
},
{
"paragraph_id": 33,
"text": "Theorem. If A is a C*-subalgebra of K(H), then there exists Hilbert spaces {Hi}i∈I such that",
"title": "Examples"
},
{
"paragraph_id": 34,
"text": "where the (C*-)direct sum consists of elements (Ti) of the Cartesian product Π K(Hi) with ||Ti|| → 0.",
"title": "Examples"
},
{
"paragraph_id": 35,
"text": "Though K(H) does not have an identity element, a sequential approximate identity for K(H) can be developed. To be specific, H is isomorphic to the space of square summable sequences l; we may assume that H = l. For each natural number n let Hn be the subspace of sequences of l which vanish for indices k ≥ n and let en be the orthogonal projection onto Hn. The sequence {en}n is an approximate identity for K(H).",
"title": "Examples"
},
{
"paragraph_id": 36,
"text": "K(H) is a two-sided closed ideal of B(H). For separable Hilbert spaces, it is the unique ideal. The quotient of B(H) by K(H) is the Calkin algebra.",
"title": "Examples"
},
{
"paragraph_id": 37,
"text": "Let X be a locally compact Hausdorff space. The space C 0 ( X ) {\\displaystyle C_{0}(X)} of complex-valued continuous functions on X that vanish at infinity (defined in the article on local compactness) form a commutative C*-algebra C 0 ( X ) {\\displaystyle C_{0}(X)} under pointwise multiplication and addition. The involution is pointwise conjugation. C 0 ( X ) {\\displaystyle C_{0}(X)} has a multiplicative unit element if and only if X {\\displaystyle X} is compact. As does any C*-algebra, C 0 ( X ) {\\displaystyle C_{0}(X)} has an approximate identity. In the case of C 0 ( X ) {\\displaystyle C_{0}(X)} this is immediate: consider the directed set of compact subsets of X {\\displaystyle X} , and for each compact K {\\displaystyle K} let f K {\\displaystyle f_{K}} be a function of compact support which is identically 1 on K {\\displaystyle K} . Such functions exist by the Tietze extension theorem, which applies to locally compact Hausdorff spaces. Any such sequence of functions { f K } {\\displaystyle \\{f_{K}\\}} is an approximate identity.",
"title": "Examples"
},
{
"paragraph_id": 38,
"text": "The Gelfand representation states that every commutative C*-algebra is *-isomorphic to the algebra C 0 ( X ) {\\displaystyle C_{0}(X)} , where X {\\displaystyle X} is the space of characters equipped with the weak* topology. Furthermore, if C 0 ( X ) {\\displaystyle C_{0}(X)} is isomorphic to C 0 ( Y ) {\\displaystyle C_{0}(Y)} as C*-algebras, it follows that X {\\displaystyle X} and Y {\\displaystyle Y} are homeomorphic. This characterization is one of the motivations for the noncommutative topology and noncommutative geometry programs.",
"title": "Examples"
},
{
"paragraph_id": 39,
"text": "Given a Banach *-algebra A with an approximate identity, there is a unique (up to C*-isomorphism) C*-algebra E(A) and *-morphism π from A into E(A) that is universal, that is, every other continuous *-morphism π ' : A → B factors uniquely through π. The algebra E(A) is called the C*-enveloping algebra of the Banach *-algebra A.",
"title": "Examples"
},
{
"paragraph_id": 40,
"text": "Of particular importance is the C*-algebra of a locally compact group G. This is defined as the enveloping C*-algebra of the group algebra of G. The C*-algebra of G provides context for general harmonic analysis of G in the case G is non-abelian. In particular, the dual of a locally compact group is defined to be the primitive ideal space of the group C*-algebra. See spectrum of a C*-algebra.",
"title": "Examples"
},
{
"paragraph_id": 41,
"text": "Von Neumann algebras, known as W* algebras before the 1960s, are a special kind of C*-algebra. They are required to be closed in the weak operator topology, which is weaker than the norm topology.",
"title": "Examples"
},
{
"paragraph_id": 42,
"text": "The Sherman–Takeda theorem implies that any C*-algebra has a universal enveloping W*-algebra, such that any homomorphism to a W*-algebra factors through it.",
"title": "Examples"
},
{
"paragraph_id": 43,
"text": "A C*-algebra A is of type I if and only if for all non-degenerate representations π of A the von Neumann algebra π(A)′′ (that is, the bicommutant of π(A)) is a type I von Neumann algebra. In fact it is sufficient to consider only factor representations, i.e. representations π for which π(A)′′ is a factor.",
"title": "Type for C*-algebras"
},
{
"paragraph_id": 44,
"text": "A locally compact group is said to be of type I if and only if its group C*-algebra is type I.",
"title": "Type for C*-algebras"
},
{
"paragraph_id": 45,
"text": "However, if a C*-algebra has non-type I representations, then by results of James Glimm it also has representations of type II and type III. Thus for C*-algebras and locally compact groups, it is only meaningful to speak of type I and non type I properties.",
"title": "Type for C*-algebras"
},
{
"paragraph_id": 46,
"text": "In quantum mechanics, one typically describes a physical system with a C*-algebra A with unit element; the self-adjoint elements of A (elements x with x* = x) are thought of as the observables, the measurable quantities, of the system. A state of the system is defined as a positive functional on A (a C-linear map φ : A → C with φ(u*u) ≥ 0 for all u ∈ A) such that φ(1) = 1. The expected value of the observable x, if the system is in state φ, is then φ(x).",
"title": "C*-algebras and quantum field theory"
},
{
"paragraph_id": 47,
"text": "This C*-algebra approach is used in the Haag-Kastler axiomatization of local quantum field theory, where every open set of Minkowski spacetime is associated with a C*-algebra.",
"title": "C*-algebras and quantum field theory"
}
] | In mathematics, specifically in functional analysis, a C∗-algebra (pronounced "C-star") is a Banach algebra together with an involution satisfying the properties of the adjoint. A particular case is that of a complex algebra A of continuous linear operators on a complex Hilbert space with two additional properties: A is a topologically closed set in the norm topology of operators.
A is closed under the operation of taking adjoints of operators. Another important class of non-Hilbert C*-algebras includes the algebra C₀(X) of complex-valued continuous functions on X that vanish at infinity, where X is a locally compact Hausdorff space. C*-algebras were first considered primarily for their use in quantum mechanics to model algebras of physical observables. This line of research began with Werner Heisenberg's matrix mechanics and in a more mathematically developed form with Pascual Jordan around 1933. Subsequently, John von Neumann attempted to establish a general framework for these algebras, which culminated in a series of papers on rings of operators. These papers considered a special class of C*-algebras which are now known as von Neumann algebras. Around 1943, the work of Israel Gelfand and Mark Naimark yielded an abstract characterisation of C*-algebras making no reference to operators on a Hilbert space. C*-algebras are now an important tool in the theory of unitary representations of locally compact groups, and are also used in algebraic formulations of quantum mechanics. Another active area of research is the program to obtain classification, or to determine the extent to which classification is possible, for separable simple nuclear C*-algebras. | 2001-11-20T00:52:14Z | 2023-12-07T14:57:36Z | [
"Template:Short description",
"Template:About",
"Template:Reflist",
"Template:Functional analysis",
"Template:Authority control",
"Template:Use American English",
"Template:More citations needed",
"Template:Nowrap",
"Template:Harvnb",
"Template:Citation",
"Template:Springer",
"Template:Spectral theory"
] | https://en.wikipedia.org/wiki/C*-algebra |
7,185 | London Borough of Croydon | The London Borough of Croydon is a London borough in south London, part of Outer London. It covers an area of 87 km² (33.6 sq mi). It is the southernmost borough of London. At its centre is the historic town of Croydon, from which the borough takes its name, while other urban centres include Coulsdon, Purley, South Norwood, Norbury, New Addington, Selsdon and Thornton Heath. Croydon is mentioned in Domesday Book, and from a small market town has expanded into one of the most populous areas on the fringe of London. The borough is now one of London's leading business, financial and cultural centres, and its influence in entertainment and the arts contributes to its status as a major metropolitan centre. Its population is 390,719, making it the largest London borough and sixteenth largest English district.
The borough was formed in 1965 from the merger of the County Borough of Croydon with Coulsdon and Purley Urban District, both of which had been within Surrey. The local authority, Croydon London Borough Council, is now part of London Councils, the local government association for Greater London. The economic strength of Croydon dates back mainly to Croydon Airport which was a major factor in the development of Croydon as a business centre. Once London's main airport for all international flights to and from the capital, it was closed on 30 September 1959 due to the lack of expansion space needed for an airport to serve the growing city. It is now a Grade II listed building and tourist attraction. Croydon Council and its predecessor Croydon Corporation unsuccessfully applied for city status in 1954, 2000, 2002 and 2012. The area is currently going through a large regeneration project called Croydon Vision 2020 which is predicted to attract more businesses and tourists to the area as well as backing Croydon's bid to become "London's Third City" (after the City of London and Westminster). Croydon is mostly urban, though there are large suburban and rural uplands towards the south of the borough. Since 2003, Croydon has been certified as a Fairtrade borough by the Fairtrade Foundation. It was the first London borough to have Fairtrade status which is awarded on certain criteria.
The area is one of the hearts of culture in London and the South East of England. Institutions such as the major arts and entertainment centre Fairfield Halls add to the vibrancy of the borough. However, its famous fringe theatre, the Warehouse Theatre, went into administration in 2012 when the council withdrew funding, and the building itself was demolished in 2013. The Croydon Clocktower was opened by Queen Elizabeth II in 1994 as an arts venue featuring a library, the independent David Lean Cinema (closed by the council in 2011 after sixteen years of operating, but now partially reopened on a part-time and volunteer basis) and museum. From 2000 to 2010, Croydon staged an annual summer festival celebrating the area's black and Indian cultural diversity, with audiences reaching over 50,000 people.
Premier League football club Crystal Palace F.C. play at Selhurst Park in Selhurst, a stadium they have been based in since 1924. Other landmarks in the borough include Addington Palace, an eighteenth-century mansion which became the official second residence of six Archbishops of Canterbury, Shirley Windmill, one of the few surviving large windmills in Greater London built in the 1850s, and the BRIT School, a creative arts institute run by the BRIT Trust which has produced artists such as Adele, Amy Winehouse and Leona Lewis.
The London Borough of Croydon was formed in 1965 from the Coulsdon and Purley Urban District and the County Borough of Croydon. The name Croydon comes from Crogdene or Croindone, named by the Saxons in the 8th century when they settled here, although the area had been inhabited since prehistoric times. It is thought to derive from the Anglo-Saxon croeas deanas, meaning "the valley of the crocuses", indicating that, like Saffron Walden in Essex, it was a centre for the collection of saffron.
By the time of the Norman invasion Croydon had a church, a mill and around 365 inhabitants as recorded in the Domesday Book. The Archbishop of Canterbury, Lanfranc, lived at Croydon Palace, which still stands. Visitors included Thomas Becket (another archbishop), and royal figures such as Henry VIII of England and Elizabeth I. The royal charter for Surrey Street Market dates back to 1276.
Croydon carried on through the ages as a prosperous market town, producing charcoal, tanning leather and brewing. It was served by the Surrey Iron Railway, the first public railway (horse drawn) in the world, in 1803, and by the London to Brighton rail link in the mid-19th century, helping it to become the largest town in what was then Surrey.
In the 20th century Croydon became known for industries such as metal working, car manufacture and its aerodrome, Croydon Airport. The airport started out during World War I as an airfield for protection against Zeppelins; it was later combined with an adjacent airfield, and the new aerodrome opened on 29 March 1920. It became the largest in London, and was the main terminal for international air freight into the capital. It developed into one of the great airports of the world during the 1920s and 1930s, and welcomed the world's pioneer aviators in its heyday. British Airways Ltd used the airport for a short period after redirecting from Northolt Aerodrome, and Croydon was the operating base for Imperial Airways. It was partly due to the airport that Croydon suffered heavy bomb damage during World War II. As aviation technology progressed, however, and aircraft became larger and more numerous, it was recognised in 1952 that the airport would be too small to cope with the ever-increasing volume of air traffic. The last scheduled flight departed on 30 September 1959. It was superseded as the main airport by both London Heathrow and London Gatwick Airport (see below).
In the late 1950s and through the 1960s the council commercialised the centre of Croydon with massive development of office blocks and the Whitgift Centre which was formerly the biggest in-town shopping centre in Europe. The centre was officially opened in October 1970 by the Duchess of Kent. The original Whitgift School there had moved to Haling Park, South Croydon in the 1930s; the replacement school on the site, Whitgift Middle School, now the Trinity School of John Whitgift, moved to Shirley Park in the 1960s, when the buildings were demolished.
The borough council unsuccessfully applied for city status in 1965, 2000 and again in 2002. If it had been successful, it would have been the third local authority in Greater London to hold that status, along with the City of London and the City of Westminster. At present the London Borough of Croydon is the second most populous local government district of England without city status, Kirklees being the first. Croydon's applications were refused as it was felt not to have an identity separate from the rest of Greater London. In 1965 it was described as "...now just part of the London conurbation and almost indistinguishable from many of the other Greater London boroughs" and in 2000 as having "no particular identity of its own".
Croydon, in common with many other areas, was hit by extensive rioting in August 2011. Reeves, a historic furniture store established in 1867 which gave its name to a junction and tram stop in the town centre, was destroyed by arson.
Croydon is currently going through a vigorous regeneration plan, called Croydon Vision 2020. This will change the urban planning of central Croydon completely. Its main aims are to make Croydon London's Third City and the hub of retail, business, culture and living in south London and South East England. The plan was showcased in a series of events called Croydon Expo. It was aimed at business and residents in the London Borough of Croydon, to demonstrate the £3.5bn development projects the Council wishes to see in Croydon in the next ten years.
There have also been exhibitions for regional districts of Croydon, including Waddon, South Norwood and Woodside, Purley, New Addington and Coulsdon. Examples of upcoming architecture featured in the expo can easily be found towards the centre of the borough, in the form of the Croydon Gateway site and the Cherry Orchard Road Towers.
Croydon London Borough Council has seventy councillors elected in 28 wards.
From the borough's creation in 1965 until 1994 the council was under continuous control, first by Conservative and Residents' Ratepayers councillors up to 1986 and then by the Conservatives. From 1994 to 2006 Labour Party councillors controlled the council. After a further eight-year period of Conservative control, the Labour group secured a ten-seat majority in the local council elections on 22 May 2014, and Councillor Tony Newman returned to lead the council for Labour. Labour remained in power until the 2022 election, at which no party gained overall control. However, the Conservative Party holds the executive Mayor and, as a result, executive power. Since the 2022 Croydon London Borough Council election the composition of the council is as follows:
A campaign group supporting an elected mayor for Croydon called DEMOC started a petition in February 2020, which they submitted to the council in September 2020. The mayoral system would replace the leader-and-cabinet system, whereby the leader of the council is chosen by the majority party or coalition of parties. The referendum was held in October 2021, resulting in a majority in favour of the mayoral system, with more than 80% of valid votes being cast in favour of the change.
The first elected mayor is the Conservative Jason Perry, elected on 9 May 2022. The Deputy Mayor is Cllr Lynne Hale. The Chief Executive since 14 September 2020 has been Katherine Kerswell.
The borough is covered by three parliamentary constituencies: these are Croydon North, Croydon Central and Croydon South.
For much of its history, Croydon Council was controlled by the Conservative Party or independents. Former Croydon councillors include former MPs Andrew Pelling, Vivian Bendall, David Congdon, Geraint Davies and Reg Prentice, London Assembly member Valerie Shawcross, Lord Bowness, John Donaldson, Baron Donaldson of Lymington (Master of the Rolls) and H.T. Muggeridge, MP and father of Malcolm Muggeridge. The first Mayor of the newly created county borough was Jabez Balfour, later a disgraced Member of Parliament. Former Conservative Director of Campaigning, Gavin Barwell, was a Croydon councillor between 1998 and 2010 and was the MP for Croydon Central from 2010 until 2017. Sarah Jones won the Croydon Central seat for Labour in 2017. Croydon North has a Labour MP, Steve Reed, and Croydon South has a Conservative MP, Chris Philp.
Some 10,000 people work directly or indirectly for the council, at its main offices at Bernard Weatherill House or in its schools, care homes, housing offices or work depots.
Croydon Town Hall on Katharine Street in Central Croydon houses the committee rooms, the mayor's and other councillors' offices, electoral services and the arts and heritage services. The present Town Hall is Croydon's third. The first town hall is thought to have been built in either 1566 or 1609. The second was built in 1808 to serve the growing town but was demolished after the present town hall was erected in 1895.
The 1808 building cost £8,000, which was regarded as an enormous sum for those days and was perhaps as controversial as the administrative building Bernard Weatherill House opened for occupation in 2013 and reputed to have cost £220,000,000. The early 19th century building was known initially as "Courthouse" as, like its predecessor and successor, the local court met there. The building stood on the western side of the High Street near to the junction with Surrey Street, the location of the town's market. The building became inadequate for the growing local administrative responsibilities and stood at a narrow point of a High Street in need of widening.
The present town hall was designed by local architect Charles Henman and was officially opened by the Prince and Princess of Wales on 19 May 1896. It was constructed in red brick, sourced from Wrotham in Kent, with Portland stone dressings and green Westmoreland slates for the roof. It also housed the court and most central council employees.
The Borough's incorporation in 1883 and a desire to improve central Croydon with improvements to traffic flows and the removal of social deprivation in Middle Row prompted the move to a new configuration of town hall provision. The second closure of the Central Railway Station provided the corporation with the opportunity to buy the station land from the London, Brighton and South Coast Railway Company for £11,500 to provide the site for the new town hall. Indeed, the council hoped to be able to sell on some of the land purchased with enough for municipal needs and still "leave a considerable margin of land which might be disposed of". The purchase of the failed railway station came despite local leaders having successfully urged the re-opening of the poorly patronised railway station. The railway station re-opening had failed to be a success so freeing up the land for alternative use.
Parts, including the former court rooms, have been converted into the Museum of Croydon and exhibition galleries. The original public library was converted into the David Lean Cinema, part of the Croydon Clocktower. The Braithwaite Hall is used for events and performances. The town hall was renovated in the mid-1990s and the imposing central staircase, long closed to the public and kept for councillors only, was re-opened in 1994. The civic complex, meanwhile, was substantially added to, with buildings across Mint Walk and the 19-floor Taberner House to house the rapidly expanding corporation's employees.
Ruskin House is the headquarters of Croydon's Labour, Trade Union and Co-operative movements and is itself a co-operative with shareholders from organisations across the three movements. In the 19th century, Croydon was a bustling commercial centre of London. It was said that, at the turn of the 20th century, approximately £10,000 was spent in Croydon's taverns and inns every week. In this environment, it was natural for the early labour movement to meet in the town's public houses. However, the temperance movement was equally strong, and Georgina King Lewis, a keen member of the Croydon United Temperance Council, took it upon herself to establish a dry centre for the labour movement. The first Ruskin House was highly successful, and there have been two more since. The current house was officially opened in 1967 by the then Labour Prime Minister, Harold Wilson. Today, Ruskin House continues to serve as the headquarters of the Trade Union, Labour and Co-operative movements in Croydon, hosting a range of meetings and being the base for several labour movement groups. Office tenants include the headquarters of the Communist Party of Britain and Croydon Labour Party. Geraint Davies, formerly the MP for Croydon Central, had offices in the building until he was defeated by Andrew Pelling; he is now the Labour representative for Swansea West in Wales.
Taberner House was built between 1964 and 1967, designed by architect H. Thornley, with Allan Holt and Hugh Lea as borough engineers. Although the council had needed extra space since the 1920s, it was only with the imminent creation of the London Borough of Croydon that action was taken. The building, demolished in 2014, was in classic 1960s style, praised at the time but subsequently much derided. Its elegant upper slab block narrowed towards both ends, a formal device which has been compared to the famous Pirelli Tower in Milan. It was named after Ernest Taberner OBE, Town Clerk from 1937 to 1963. Until September 2013, Taberner House housed most of the council's central employees and was the main location for the public to access information and services, particularly with respect to housing.
In September 2013, Council staff moved into Bernard Weatherill House in Fell Road, (named after the former Speaker of the House and Member of Parliament for Croydon North-East). Staff from the Met Police, NHS, Jobcentre Plus, Croydon Credit Union, Citizens Advice Bureau as well as 75 services from the council all moved to the new building.
For elections to the Greater London Council, the borough formed the Croydon electoral division, electing four members. In 1973 it was divided into the single-member Croydon Central, Croydon North East, Croydon North West and Croydon South electoral divisions. The Greater London Council was abolished in 1986.
Since 2000, for elections to the London Assembly, the borough forms part of the Croydon and Sutton constituency.
Private Eye magazine has named Croydon the most rotten borough in Britain six years in a row (2017–2022).
The borough is in the far south of London, with the M25 orbital motorway stretching to the south of it, between Croydon and Tandridge. To the north and east, the borough mainly borders the London Borough of Bromley, and in the north west the boroughs of Lambeth and Southwark. The boroughs of Sutton and Merton are located directly to the west. It is at the head of the River Wandle, just to the north of a significant gap in the North Downs. It lies 10 miles (16 km) south of Central London, and the earliest settlement may have been a Roman staging post on the London-Portslade road, although conclusive evidence has not yet been found. The main town centre houses a great variety of well-known stores on North End and two shopping centres. It was pedestrianised in 1989 to attract people back to the town centre. Another shopping centre called Park Place was due to open in 2012 but has since been scrapped.
The CR postcode area covers most of the south and centre of the borough while the SE and SW postcodes cover the northern parts, including Crystal Palace, Upper Norwood, South Norwood, Selhurst (part), Thornton Heath (part), Norbury and Pollards Hill (part).
Districts in the London Borough of Croydon include Addington, a village to the east of Croydon which until 2000 was poorly linked to the rest of the borough as it was without any railway or light rail stations, with only a few patchy bus services. Addiscombe is a district just northeast of the centre of Croydon, and is popular with commuters to central London as it is close to the busy East Croydon station. Ashburton, to the northeast of Croydon, is mostly home to residential houses and flats, being named after Ashburton House, one of the three big houses in the Addiscombe area. Broad Green is a small district, centred on a large green with many homes and local shops in West Croydon. Coombe is an area, just east of Croydon, which has barely been urbanised and has retained its collection of large houses fairly intact. Coulsdon, south west of Central Croydon, which has retained a good mix of traditional high street shops as well as a large number of restaurants for its size. Croydon is the principal area of the borough, Crystal Palace is an area north of Croydon, which is shared with the London Boroughs of Lambeth, Southwark, Lewisham and Bromley. Fairfield, just northeast of Croydon, holds the Fairfield Halls and the village of Forestdale, to the east of Croydon's main area, commenced work in the late 1960s and completed in the mid-70s to create a larger town on what was previously open ground. Hamsey Green is a place on the plateau of the North Downs, south of Croydon. Kenley, again south of the centre, lie within the London Green Belt and features a landscape dominated by green space. New Addington, to the east, is a large local council estate surrounded by open countryside and golf courses. Norbury, to the northwest, is a suburb with a large ethnic population. Norwood New Town is a part of the Norwood triangle, to the north of Croydon. Monks Orchard is a small district made up of large houses and open space in the northeast of the borough. Pollards Hill is a residential district with houses on roads, which are lined with pollarded lime trees, stretching to Norbury. Purley, to the south, is a main town whose name derives from "pirlea", which means 'Peartree lea'. Sanderstead, to the south, is a village mainly on high ground at the edge of suburban development in Greater London. Selhurst is a town, to the north of Croydon, which holds the nationally known school, The BRIT School. Selsdon is a suburb which was developed during the inter-war period in the 1920s and 1930s, and is remarkable for its many Art Deco houses, to the southeast of Croydon Centre. Shirley, is to the east of Croydon, and holds Shirley Windmill. South Croydon, to the south of Croydon, is a locality which holds local landmarks such as The Swan and Sugarloaf public house and independent Whitgift School part of the Whitgift Foundation. South Norwood, to the north, is in common with West Norwood and Upper Norwood, named after a contraction of Great North Wood and has a population of around 14,590. Thornton Heath is a town, to the northwest of Croydon, which holds Croydon's principal hospital Mayday. Upper Norwood is north of Croydon, on a mainly elevated area of the borough. Waddon is a residential area, mainly based on the Purley Way retail area, to the west of the borough. Woodside is located to the northeast of the borough, with streets based on Woodside Green, a small sized area of green land. And finally Whyteleafe is a town, right to the edge of Croydon with some areas in the Surrey district of Tandridge.
Croydon is a gateway to the south from central London, with some major roads running through it. Purley Way, part of the A23, was built to by-pass Croydon town centre. It is one of the busiest roads in the borough, and is the site of several major retail developments including one of only 18 IKEA stores in the country, built on the site of the former power station. The A23 continues southward as Brighton Road, which is the main route running towards the south from Croydon to Purley. The centre of Croydon is very congested, and the urban planning has since become out of date and quite inadequate, due to the expansion of Croydon's main shopping area and office blocks. Wellesley Road is a north–south dual carriageway that cuts through the centre of the town, and makes it hard to walk between the town centre's two railway stations. Croydon Vision 2020 includes a plan for a more pedestrian-friendly replacement. It has also been named as one of the worst roads for cyclists in the area. Construction of the Croydon Underpass beneath the junction of George Street and Wellesley Road/Park Lane started in the early 1960s, mainly to alleviate traffic congestion on Park Lane, above the underpass. The Croydon Flyover is also near the underpass, and next to Taberner House. It mainly leads traffic on to Duppas Hill, towards Purley Way with links to Sutton and Kingston upon Thames. The major junction on the flyover is for Old Town, which is also a large three-lane road.
Croydon covers an area of 86.52 km². Croydon's physical features consist of many hills and rivers that are spread out across the borough and into the North Downs, Surrey and the rest of south London. Addington Hills is a major hilly area to the south of London and is recognised as a significant obstacle to the growth of London from its origins as a port on the north side of the river to a large circular city. The Great North Wood is a former natural oak forest that covered the Sydenham Ridge and the southern reaches of the River Effra and its tributaries.
The most notable tree, called Vicar's Oak, marked the boundary of four ancient parishes; Lambeth, Camberwell, Croydon and Bromley. John Aubrey referred to this "ancient remarkable tree" in the past tense as early as 1718, but according to JB Wilson, the Vicar's Oak survived until 1825. The River Wandle is also a major tributary of the River Thames, where it stretches to Wandsworth and Putney for 9 miles (14 km) from its main source in Waddon.
Croydon has a temperate climate in common with most areas of Great Britain: its Köppen climate classification is Cfb. Its mean annual temperature of 9.6 °C is similar to that experienced throughout the Weald, and slightly cooler than nearby areas such as the Sussex coast and central London. Rainfall is considerably below England's average (1971–2000) level of 838 mm, and every month is drier overall than the England average.
The nearest weather station is at Gatwick Airport.
The skyline of Croydon has significantly changed over the past 50 years. High rise buildings, mainly office blocks, now dominate the skyline. The most notable of these buildings include Croydon Council's headquarters Taberner House, which has been compared to the famous Pirelli Tower of Milan, and the Nestlé Tower, the former UK headquarters of Nestlé.
In recent years, the development of tall buildings, such as the approved Croydon Vocational Tower and Wellesley Square, has been encouraged in the London Plan, and will lead to the erection of new skyscrapers in the coming years as part of London's high-rise boom.
No. 1 Croydon, formerly the NLA Tower, Britain's 88th tallest tower, close to East Croydon station, is an example of 1970s architecture. The tower was originally nicknamed the Threepenny bit building, as it resembles a stack of pre-decimalisation Threepence coins, which were 12-sided. It is now most commonly called The Octagon, being 8-sided.
Lunar House is another high-rise building. Like other government office buildings on Wellesley Road, such as Apollo House, its name was inspired by the US Moon landings (in the Croydon suburb of New Addington there is a public house, built during the same period, called The Man on the Moon). Lunar House houses the Home Office's UK Visas and Immigration directorate, while Apollo House houses the UK Border Agency.
A new generation of buildings is being considered by the council as part of Croydon Vision 2020, so that the borough does not lose its claim to the "largest office space in the south east" outside central London. Projects such as Wellesley Square, a mix of residential and retail with an eye-catching colour design, and 100 George Street, a proposed modern office block, are incorporated in this vision.
Notable events in the development of Croydon's skyline include the Millennium project, which set out to create the largest single urban lighting project ever undertaken by illuminating the buildings of Croydon for the third millennium. The project provided new lighting for the buildings and an opportunity to project images and words onto them, mixing art and poetry with coloured light and displaying public information after dark. Apart from increasing night-time activity in Croydon, and thereby reducing the fear of crime, it helped to promote the sustainable use of older buildings by displaying them in a more positive way.
There are a large number of attractions and places of interest all across the borough of Croydon, ranging from historic sites in the north and south to modern towers in the centre.
Croydon Airport was once London's main airport but closed on 30 September 1959 because the expansion of London left it no room to grow, and Heathrow Airport took over as London's main airport. The site has now been mostly converted to offices, although some important elements of the airport remain and it is a tourist attraction.
The Croydon Clocktower arts venue was opened by Elizabeth II in 1994. It includes the Braithwaite Hall (the former reference library, named after the Rev. Braithwaite, who donated it to the town) for live events, the David Lean Cinema (built in memory of David Lean), the Museum of Croydon and Croydon Central Library. The Museum of Croydon (formerly known as Croydon Lifetimes Museum) highlights Croydon in the past and the present, and currently features high-profile exhibitions including the Riesco Collection, The Art of Dr Seuss and the Whatever the Weather gallery. Shirley Windmill is a working windmill and one of the few surviving large windmills in Surrey, built in 1854. It is Grade II listed and received a £218,100 grant from the Heritage Lottery Fund. Addington Palace is an 18th-century mansion in Addington, originally built as Addington Place in the 16th century. The palace became the official second residence of six archbishops, five of whom are buried in St Mary's Church and churchyard nearby.
North End is the main pedestrianised shopping road in Croydon, having Centrale to one side and the Whitgift Centre to the other. The Warehouse Theatre is a popular theatre for mostly young performers and is due to get a face-lift on the Croydon Gateway site.
The Nestlé Tower, formerly the UK headquarters of Nestlé, is one of the tallest towers in England and is due to be refitted as part of the Park Place development. The Fairfield Halls is a well-known concert hall and exhibition centre, opened in 1962. It is frequently used for BBC recordings and was formerly the home of ITV's World of Sport. It includes the Ashcroft Theatre and the Arnhem Gallery.
Croydon Palace was the summer residence of the Archbishop of Canterbury for over 500 years and received regular visitors such as Henry III and Queen Elizabeth I. It is thought to have been built around 960. Croydon Cemetery is a large cemetery and crematorium west of Croydon, best known for the gravestone of Derek Bentley, who was wrongly hanged in 1953. Mitcham Common is an area of common land partly shared with the boroughs of Sutton and Merton. Almost 500,000 years ago, Mitcham Common formed part of the river bed of the River Thames.
The BRIT School is a performing arts and technology school owned by the BRIT Trust (known for the BRIT Awards). Famous former students include Kellie Shirley, Amy Winehouse, Leona Lewis, Adele, Kate Nash, Dane Bowers, Katie Melua and Lyndon David-Hall. Grants is an entertainment venue in the centre of Croydon which includes a Vue cinema.
Surrey Street Market has roots in the 13th century, or earlier, and was chartered by the Archbishop of Canterbury in 1276. The market is regularly used as a location for TV, film and advertising. Croydon Minster, formerly the parish church, was established in the Anglo-Saxon period, and parts of the surviving building (notably the tower) date from the 14th and 15th centuries. However, the church was largely destroyed by fire in 1867, so the present structure is a rebuild of 1867–69 to the designs of George Gilbert Scott. It is the burial place of six archbishops, and contains monuments to Archbishops Sheldon and Whitgift.
The table shows population change since 1801, including the percentage change since the previous census. Although the London Borough of Croydon has existed only since 1965, earlier figures have been generated by combining data from the towns, villages and civil parishes that would later be absorbed into the authority.
According to the 2011 census, Croydon had a population of 363,378, making it the most populous borough in Greater London. The estimated population in 2017 was around 384,800, of whom 186,900 were male and 197,900 female, giving a density of about 4,448 inhabitants per km². 248,200 residents of Croydon were between the ages of 16 and 64.
In 2011, White was the largest ethnic group at 55.1% of the population. Black residents made up 20.2%, Asian residents 16.4%, and 8.3% identified with another ethnic group.
The most common tenure is owner-occupation, with only a small percentage of homes rented. Many new housing schemes and developments are taking place in Croydon, such as The Exchange and Bridge House, IYLO, Wellesley Square (now known as Saffron Square) and Altitude 25. In 2006, the Metropolitan Police recorded a 10% fall in the number of crimes committed in Croydon, a faster fall than for London as a whole. Croydon has had the highest fall in the number of cases of violence against the person in south London, and is one of the top ten safest local authorities in London. According to Your Croydon (a local community magazine) this is due to a stronger partnership between Croydon Council and the police. In 2007, overall crime across the borough decreased by 5%, with the number of incidents falling from 32,506 in 2006 to 30,862 in 2007. However, in the year ending April 2012, the Metropolitan Police recorded the highest rates for murder and rape in London in Croydon, accounting for almost 10% of all murders and 7% of all rapes. Croydon has five police stations: Croydon police station is on Park Lane in the centre of the town near the Fairfield Halls; South Norwood police station is a newly refurbished building just off the High Street; Norbury police station is on London Road; Kenley station is on Godstone Road; and New Addington police station is on Addington Village Road.
The predominant religion of the borough is Christianity. According to the 2021 United Kingdom census, the borough has some 190,880 Christians, mainly Protestant, making this the largest religious group in the borough, followed by Islam with 40,717 Muslim residents.
101,119 Croydon residents stated that they are atheist or non-religious in the 2021 Census.
Croydon Minster is the most notable of the borough's 35 churches. This church was founded in Saxon times, since there is a record of "a priest of Croydon" in 960, although the first record of a church building is in the Domesday Book (1086). In its final medieval form, the church was mainly a Perpendicular-style structure, but this was severely damaged by fire in 1867, following which only the tower, south porch and outer walls remained. Under the direction of Sir George Gilbert Scott the church was rebuilt, incorporating the remains and essentially following the design of the medieval building, and was reconsecrated in 1870. It still contains several important monuments and fittings saved from the old church.
The Area Bishop of Croydon is a suffragan bishop in the Anglican Diocese of Southwark. The present bishop is the Right Reverend Jonathan Clark.
The main employment sectors of the borough are retail and enterprise, mainly based in central Croydon. Major employers are well-known companies with stores or offices in the town. Purley Way is a major source of employment, offering work for sales assistants, sales consultants and store managers. IKEA Croydon, built in 1992, brought many unskilled jobs to Croydon. The store, with a total floor area of 23,000 m², took over the former site of Croydon power station, whose closure had left many skilled workers unemployed. In May 2006, an extension to the showroom, market hall and self-serve areas made IKEA the fifth-biggest employer in Croydon.
Other big employers around Purley include the large Tesco Extra store in the town centre, along with other stores in Purley Way including Sainsbury's, B&Q and Vue. Croydon town centre is also a major retail centre, and home to many high street and department stores as well as designer boutiques. The main town centre shopping areas are on the North End precinct, in the Whitgift Centre, Centrale and St George's Walk. Department stores in Croydon town centre include House of Fraser, Marks and Spencer, Allders, Debenhams and T.K. Maxx. Croydon's main market is Surrey Street Market, which has a royal charter dating back to 1276. Shopping areas outside the town centre include the Valley Park retail complex, Croydon Colonnades, Croydon Fiveways, and the Waddon Goods Park.
In 2010 research on retail footprint, Croydon ranked 29th in terms of retail expenditure, at £770 million, putting it sixth in the Greater London area, behind Kingston upon Thames and Westfield London. In 2005, Croydon came 21st, second in London behind the West End, with £909 million, whilst Kingston was 24th with £864 million. In a 2004 survey of the top retail destinations, Croydon was 27th.
In 2007, Croydon leapt up the annual business growth league table, with a 14% rise in new firms trading in the borough after 125 new companies started up, increasing the total from 900 to 1,025. This enabled the town, which has also won the Enterprising Britain Award and the "most enterprising borough in London" award, to jump from 31st to 14th in the table.
Croydon is home to a variety of international business communities, each with dynamic business networks, so businesses located in Croydon are in a good position to make the most of international trade and recruit from a labour force fluent in 130 languages.
Tramlink created many jobs when it opened in 2000, for engineers as well as drivers, and many of the people involved came from Croydon, the original hub of the system. Retail stores in Centrale, the Whitgift Centre and North End employ people regularly and create many jobs, especially at Christmas. The building of Park Place will create yet more jobs, as will the wider regeneration programme, Croydon Vision 2020, highlighted in the Croydon Expo, which includes the Croydon Gateway, Wellesley Square and much more.
Croydon is a major office area in the south east of England, the largest outside central London. Many companies based in Europe and worldwide have European or British headquarters in the town. American International Group (AIG) has offices in No. 1 Croydon, formerly the NLA Tower, shared with Liberata, Pegasus and the Institute of Public Finance. AIG is the sixth-largest company in the world according to the 2007 Forbes Global 2000 list. The Swiss company Nestlé has its UK headquarters in the Nestlé Tower, on the site of the formerly proposed Park Place shopping centre. Real Digital International has developed a purpose-built 70,000 sq ft (6,500 m²) factory on Purley Way, equipped with "the most sophisticated production equipment and technical solutions". ntl:Telewest, now Virgin Media, has offices at Communications House, a legacy of the Telewest side of the business when it was known as Croydon Cable.
The Home Office UK Visas and Immigration department has its headquarters in Lunar House in central Croydon. In 1981, Superdrug opened an 11,148 m² (120,000 sq ft) distribution centre and office complex at Beddington Lane. The head office of international engineering and management consultancy Mott MacDonald is located in Mott MacDonald House on Sydenham Road, one of four offices it occupies in the town centre. BT has large offices in Prospect East in central Croydon. The Royal Bank of Scotland also has large offices in Purley, south of Croydon. Direct Line also has an office opposite Taberner House. Other companies with offices in Croydon include Lloyds TSB, Merrill Lynch and Balfour Beatty. Ann Summers used to have its headquarters in the borough but has moved to the Wapses Lodge Roundabout in Tandridge.
The council effectively declared bankruptcy by issuing a section 114 notice in December 2020.
East Croydon and West Croydon are the main stations in the borough. South Croydon railway station also serves the town but is less well known.
East Croydon is served by Govia Thameslink Railway, operating under the Southern and Thameslink brands. Services travel via the Brighton Main Line north to London Victoria, London Bridge, London St Pancras, Luton Airport, Bedford, Cambridge and Peterborough and south to Gatwick Airport, Ore, Brighton, Littlehampton, Bognor Regis, Southampton and Portsmouth. East Croydon is the largest and busiest station in Croydon and the third busiest in London, excluding Travelcard Zone 1.
East Croydon was served by long distance Arriva CrossCountry services to Birmingham and the North of England until they were withdrawn in December 2008.
West Croydon is served by London Overground and Southern services north to Highbury & Islington, London Bridge and London Victoria, and south to Sutton and Epsom Downs.
South Croydon is mainly served by suburban Southern services to and from London Bridge, London Victoria and the eastern part of Surrey.
Croydon is one of only five London Boroughs not to have at least one London Underground station within its boundaries, with the closest tube station being Morden.
A sizeable bus infrastructure which is part of the London Buses network operates from a hub at West Croydon bus station. The original bus station opened in May 1985, closing in October 2014. A new bus station opened in October 2016.
Addington Village Interchange is a bus terminal in Addington Village providing interchange between Tramlink and bus services in this outlying part of the borough. Services are operated under contract by Abellio London, Arriva London, London Central, Metrobus, Quality Line and Selkent.
The Tramlink light rail system opened in 2000, serving the borough and surrounding areas. Its network consists of three lines, from Elmers End to West Croydon, from Beckenham to West Croydon, and from New Addington to Wimbledon, all running via the Croydon loop on which the system is centred. It is the only tram system in London, although the Docklands Light Railway is another light rail system in the capital. It serves Mitcham, Woodside, Addiscombe and the Purley Way retail and industrial area, amongst others.
Croydon is linked into the national motorway network via the M23 and the M25 orbital motorway. The M25 skirts the south of the borough, linking Croydon with other parts of London and the surrounding counties; the M23 branches from the M25 close to Coulsdon, linking the town with the south coast, Crawley, Reigate and Gatwick Airport. The A23 connects the borough with these motorways and is the major trunk road through Croydon, linking it with central London, East Sussex, Horsham and Littlehaven. The old London to Brighton road passes through the west of the borough as Purley Way, bypassing the commercial centre of Croydon through which it once ran.
The A22 and A23 are the major trunk roads through Croydon. Both run north–south and connect to each other in Purley. The A22 starts in Croydon and links it to East Grinstead, Tunbridge Wells, Uckfield and Eastbourne. Other major roads generally radiate spoke-like from the town centre. The A23 cuts through Croydon, running from London to Brighton and Gatwick Airport. Wellesley Road is an urban dual carriageway which cuts through the middle of the central business district. It was constructed in the 1960s as part of a planned ring road for Croydon and includes an underpass which allows traffic to avoid the town centre.
The closest international airport to Croydon is Gatwick Airport, which is located 19 miles (31 km) from the town centre. Gatwick Airport opened in August 1930 as an aerodrome and is a major international operational base for British Airways, EasyJet and Virgin Atlantic. It currently handles around 35 million passengers a year, making it London's second largest airport, and the second busiest airport in the United Kingdom after Heathrow. Heathrow, London City and Luton airports all lie within a two hours' drive of Croydon. Gatwick and Luton Airports are connected to Croydon by frequent direct trains, while Heathrow is accessible by the route SL7 bus.
Although hilly, Croydon is compact and has few major trunk roads running through it. It is on one of the Connect2 schemes which form part of the National Cycle Network route running around Croydon. The North Downs, an area of outstanding natural beauty popular with both on- and off-road cyclists, is so close to Croydon that part of it lies within the borough boundary, and there are routes into it almost from the civic centre.
In March 2011, the main forms of transport that residents used to travel to work were: driving a car or van, 20.2% of all residents aged 16–74; train, 59.5%; bus, minibus or coach, 7.5%; on foot, 5.1%; underground, metro, light rail, tram, 4.3%; work mainly at or from home, 2.9%; passenger in a car or van, 1.5%.
Home Office policing in Croydon is provided by the Metropolitan Police. The force's Croydon arm has its head office on Park Lane, next to the Fairfield Halls and Croydon College in central Croydon. Public transport is co-ordinated by Transport for London. Statutory emergency fire and rescue service is provided by the London Fire Brigade, which has five stations in Croydon.
NHS South West London Clinical Commissioning Group (a merger of the previous NHS Croydon CCG with others in South West London) is the body responsible for public health and for planning and funding health services in the borough. Croydon has 227 GPs in 64 practices, 156 dentists in 51 practices, 166 pharmacists and 70 optometrists in 28 practices.
Croydon University Hospital, formerly known as Mayday Hospital, built on a 19-acre (7.7 ha) site in Thornton Heath near the borough's western boundary with Merton, is a large NHS hospital administered by Croydon Health Services NHS Trust. Former names of the hospital include the Croydon Union Infirmary from 1885 to 1923 and the Mayday Road Hospital from 1923 to around 1930. It is a district general hospital with a 24-hour accident and emergency department, and NHS Direct has a regional centre based there. The trust also provides services at Purley War Memorial Hospital in Purley. Croydon General Hospital was on London Road, but its services were transferred to Mayday as it was too small to cope with the growing population of the borough. The Sickle Cell and Thalassaemia Centre and the Emergency Minor Treatment Centre are smaller facilities operated by the trust in the borough. Cane Hill was a psychiatric hospital in Coulsdon.
Waste management is co-ordinated by the local authority. Unlike many other boroughs in Greater London, Croydon collects and disposes of its rubbish independently and is not part of a joint waste authority. Locally produced inert waste for disposal is sent to landfill in the south of the borough. There have been calls by the ODPM to give the Greater London Authority a waste management function. The Mayor of London has made repeated attempts to bring the different waste authorities together to form a single waste authority for London; this has faced significant opposition from existing authorities, although it has had significant support from other sectors and from the surrounding regions that manage most of London's waste. Croydon has the joint best recycling rate in London, at 36%, but its refuse collectors have been criticised for rushed, poor-quality work. Croydon's distribution network operator for electricity is EDF Energy Networks; there are no power stations in the borough. Thames Water manages Croydon's drinking and waste water, with supplies sourced from several local reservoirs, including Beckton and King George VI. Before 1971, Croydon Corporation was responsible for water treatment in the borough.
The borough of Croydon covers 86.52 km² and has a population of approximately 340,000. There are five fire stations within the borough: Addington (two pumping appliances), Croydon (two pumping appliances, an incident response unit, a fire rescue unit and a USAR appliance), Norbury (two pumping appliances), Purley (one pumping appliance) and Woodside (one pumping appliance). Purley has the largest station ground but dealt with the fewest incidents during 2006/07.
The fire stations, as part of the Community Fire Safety scheme, visited 49 schools in 2006/2007.
Croydon has the highest number of schools of any London borough, and 26% of its population is under 20 years old. These include 95 primary schools, 21 secondary schools and four further education establishments. Croydon College has its main building, a high-rise, in central Croydon. Other colleges in the borough include John Ruskin College in Addington and Coulsdon College in Coulsdon. South Norwood has been the home of Spurgeon's College, a world-famous Baptist theological college, since 1923; Spurgeon's is located on South Norwood Hill and currently has some 1,000 students. The London Borough of Croydon is the local education authority for the borough.
Overall, Croydon was ranked 77th out of all the local education authorities in the UK, up from 92nd in 2007. In 2007, the Croydon LEA was ranked 81st out of 149 in the country – and 21st in Greater London – based on the percentage of pupils attaining at least 5 A*–C grades at GCSE including maths and English (37.8% compared with the national average of 46.7%). The most successful public sector schools in 2010 were Harris City Academy Crystal Palace and Coloma Convent Girls' School. The percentage of pupils achieving 5 A*-C GCSEs including maths and English was above the national average in 2010.
The borough of Croydon has 14 libraries, a joint library and a mobile library. Many of the libraries were built long ago and have become outdated, so the council began updating a few, including Ashburton Library, which moved from its former site into the state-of-the-art Ashburton Learning Village complex, built on the former site of the old 'A Block' of Ashburton Community School, a school now housed within the centre. The library is now on one floor. This format was planned to be rolled out across all of the council's libraries but was seen as costing too much.
South Norwood Library, New Addington Library, Shirley Library, Selsdon Library, Sanderstead Library, Broad Green Library, Purley Library, Coulsdon Library and Bradmore Green Library are examples of older council libraries. The main library is Croydon Central Library, which holds extensive reference collections, newspaper archives and a tourist information point (one of three in southeast London). Upper Norwood Library is a joint library with the London Borough of Lambeth, meaning both councils fund the library and its resources; however, while Lambeth has nearly doubled its funding for the library over the past several years, Croydon has kept its contribution the same, casting doubt on the library's future.
The borough has been criticised in the past for not having enough leisure facilities, a factor in Croydon's rating as a three-star borough. Thornton Heath's ageing sports centre has been demolished and replaced by a modern leisure centre. South Norwood Leisure Centre was closed in 2006 so that, like Thornton Heath, it could be demolished and redesigned from scratch, at an estimated cost of around £10 million.
In May 2006 the Conservative Party took control of Croydon Council and decided that a refurbishment would be more economical than rebuilding; the decision caused some controversy.
Sport Croydon is the commercial arm for leisure in the borough. Fusion currently provides leisure services for the council, a contract previously held by Parkwood Leisure.
Football teams include Crystal Palace F.C., who play in the Premier League at Selhurst Park. AFC Croydon Athletic, nicknamed the Rams, and Croydon F.C. both play at Croydon Sports Arena in the Combined Counties League, while Holmesdale, founded in South Norwood but currently playing on Oakley Road in Bromley, compete in the Southern Counties East Football League.
Non-football teams that play in Croydon include Streatham-Croydon RFC, a rugby union club in Thornton Heath who play at Frant Road, and South London Storm Rugby League Club, based at Streatham's ground, who compete in the Rugby League Conference. Another rugby union club in Croydon is Croydon RFC, who play at Addington Road. The London Olympians are an American football team who play in Division 1 South of the British American Football League. The Croydon Pirates are one of the most successful teams in the British Baseball Federation, though their ground is just outside the borough, in Sutton.
Croydon Amphibians SC plays in Division 2 of the British Water Polo League; the team won the National League Division 2 title in 2008.
Croydon has over 120 parks and open spaces, ranging from the 200-acre (0.81 km²) Selsdon Wood Nature Reserve to many recreation grounds and sports fields scattered throughout the borough.
Croydon Council has cut funding to the Warehouse Theatre.
In 2005, Croydon Council drew up a Public Art Strategy, with a vision intended to be accessible and to enhance people's enjoyment of their surroundings. The strategy delivered a new event, Croydon's Summer Festival, hosted in Lloyd Park and consisting of two days of events. The first is Croydon's World Party, a free one-day event with three stages featuring world, jazz and dance music from the UK and internationally. The final day's event is the Croydon Mela, a day of music with a mix of traditional Asian culture and east-meets-west club beats across four stages, as well as dozens of food stalls and a funfair; it has attracted crowds of over 50,000 people. The strategy also created a creative industries hub in Old Town, ensured that public art is included in developments such as College Green and Ruskin Square, and investigated the possibility of gallery space in the Cultural Quarter.
Fairfield Halls hosts productions throughout the year, including drama, ballet, opera and pantomime, and can be converted to show films; it contains the Ashcroft Theatre, the Arnhem Gallery civic hall and an art gallery. Other cultural venues, including shopping and exhibition spaces, are Surrey Street Market, mainly a meat and vegetable market near Croydon's main shopping area with a royal charter dating back to 1276; Airport House, a refurbished conference and exhibition centre within part of Croydon Airport; the Whitgift Centre, the current main shopping centre in the borough; and Centrale, a newer shopping centre that houses many more familiar names, as well as Croydon's House of Fraser.
There are three local newspapers operating within the borough. The Croydon Advertiser began life in 1869 and was in 2005 the third-best-selling paid-for weekly newspaper in London. The Advertiser is Croydon's major paid-for weekly paper and is on sale every Friday in five geographical editions: Croydon; Sutton & Epsom; Coulsdon & Purley; New Addington; and Caterham. The paper converted from a broadsheet to a compact (tabloid) format on 31 March 2006, and was bought by Northcliffe Media, part of the Daily Mail and General Trust group, on 6 July 2007. The Croydon Post is a free newspaper available across the borough and is operated by the Advertiser group; in 2008 its circulation was higher than that of the Advertiser Group's main paid-for title.
The Croydon Guardian is another local weekly paper, which is paid for at newsagents but free at Croydon Council libraries and via deliveries. It is one of the best circulated local newspapers in London and once had the highest circulation in Croydon with around one thousand more copies distributed than The Post.
The borough is served by the London regional versions of BBC and ITV coverage, from either the Crystal Palace or Croydon transmitters.
Croydon Television is owned by the Croydon Broadcasting Corporation (CBC). Broadcasting from studios in Croydon, the CBC is fully independent: it receives no government or local council grants or funding and is supported by donations, sponsorship and commercial advertising.
Capital Radio and Gold serve the borough. Local BBC radio is provided by BBC London 94.9. Other stations include Kiss 100, Absolute Radio and Magic 105.4 FM from Bauer Radio and Capital Xtra, Heart 106.2 and Smooth Radio from Global Radio. In 2012, Croydon Radio, an online and FM radio station, and the first official FM radio station for the London Borough of Croydon, began serving the area. The borough is also home to its own local TV station, Croydon TV.
The London Borough of Croydon is twinned with the municipality of Arnhem in the east of the Netherlands, one of the 20 largest cities in that country. The two have been twinned since 1946, after both suffered extensive bomb damage during the Second World War. There is also a Guyanese link supported by the council.
In September 2009 it was revealed that Croydon Council had around £20m of its pension fund for employees invested in shares in Imperial Tobacco and British American Tobacco. Members of the opposition Labour group on the council, who had banned such shareholdings when in control, described this as "dealing in death" and inconsistent with the council's tobacco control strategy.
The following people and military units have received the Freedom of the Borough of Croydon.
{
"paragraph_id": 34,
"text": "Croydon is a gateway to the south from central London, with some major roads running through it. Purley Way, part of the A23, was built to by-pass Croydon town centre. It is one of the busiest roads in the borough, and is the site of several major retail developments including one of only 18 IKEA stores in the country, built on the site of the former power station. The A23 continues southward as Brighton Road, which is the main route running towards the south from Croydon to Purley. The centre of Croydon is very congested, and the urban planning has since become out of date and quite inadequate, due to the expansion of Croydon's main shopping area and office blocks. Wellesley Road is a north–south dual carriageway that cuts through the centre of the town, and makes it hard to walk between the town centre's two railway stations. Croydon Vision 2020 includes a plan for a more pedestrian-friendly replacement. It has also been named as one of the worst roads for cyclists in the area. Construction of the Croydon Underpass beneath the junction of George Street and Wellesley Road/Park Lane started in the early 1960s, mainly to alleviate traffic congestion on Park Lane, above the underpass. The Croydon Flyover is also near the underpass, and next to Taberner House. It mainly leads traffic on to Duppas Hill, towards Purley Way with links to Sutton and Kingston upon Thames. The major junction on the flyover is for Old Town, which is also a large three-lane road.",
"title": "Geography and climate"
},
{
"paragraph_id": 35,
"text": "Croydon covers an area of 86.52 km. Croydon's physical features consist of many hills and rivers that are spread out across the borough and into the North Downs, Surrey and the rest of south London. Addington Hills is a major hilly area to the south of London and is recognised as a significant obstacle to the growth of London from its origins as a port on the north side of the river, to a large circular city. The Great North Wood is a former natural oak forest that covered the Sydenham Ridge and the southern reaches of the River Effra and its tributaries.",
"title": "Geography and climate"
},
{
"paragraph_id": 36,
"text": "The most notable tree, called Vicar's Oak, marked the boundary of four ancient parishes; Lambeth, Camberwell, Croydon and Bromley. John Aubrey referred to this \"ancient remarkable tree\" in the past tense as early as 1718, but according to JB Wilson, the Vicar's Oak survived until 1825. The River Wandle is also a major tributary of the River Thames, where it stretches to Wandsworth and Putney for 9 miles (14 km) from its main source in Waddon.",
"title": "Geography and climate"
},
{
"paragraph_id": 37,
"text": "Croydon has a temperate climate in common with most areas of Great Britain: its Köppen climate classification is Cfb. Its mean annual temperature of 9.6 °C is similar to that experienced throughout the Weald, and slightly cooler than nearby areas such as the Sussex coast and central London. Rainfall is considerably below England's average (1971–2000) level of 838 mm, and every month is drier overall than the England average.",
"title": "Geography and climate"
},
{
"paragraph_id": 38,
"text": "The nearest weather station is at Gatwick Airport.",
"title": "Geography and climate"
},
{
"paragraph_id": 39,
"text": "The skyline of Croydon has significantly changed over the past 50 years. High rise buildings, mainly office blocks, now dominate the skyline. The most notable of these buildings include Croydon Council's headquarters Taberner House, which has been compared to the famous Pirelli Tower of Milan, and the Nestlé Tower, the former UK headquarters of Nestlé.",
"title": "Geography and climate"
},
{
"paragraph_id": 40,
"text": "In recent years, the development of tall buildings, such as the approved Croydon Vocational Tower and Wellesley Square, has been encouraged in the London Plan, and will lead to the erection of new skyscrapers in the coming years as part of London's high-rise boom.",
"title": "Geography and climate"
},
{
"paragraph_id": 41,
"text": "No. 1 Croydon, formerly the NLA Tower, Britain's 88th tallest tower, close to East Croydon station, is an example of 1970s architecture. The tower was originally nicknamed the Threepenny bit building, as it resembles a stack of pre-decimalisation Threepence coins, which were 12-sided. It is now most commonly called The Octagon, being 8-sided.",
"title": "Geography and climate"
},
{
"paragraph_id": 42,
"text": "Lunar House is another high-rise building. Like other government office buildings on Wellesley Road, such as Apollo House, the name of the building was inspired by the US Moon landings (In the Croydon suburb of New Addington there is a public house, built during the same period, called The Man on the Moon). Lunar House houses the Home Office building for Visas and Immigration. Apollo House houses The Border Patrol Agency.",
"title": "Geography and climate"
},
{
"paragraph_id": 43,
"text": "A new generation of buildings are being considered by the council as part of Croydon Vision 2020, so that the borough doesn't lose its title of having the \"largest office space in the south east\", excluding central London. Projects such as Wellesley Square, which will be a mix of residential and retail with an eye-catching colour design and 100 George Street a proposed modern office block are incorporated in this vision.",
"title": "Geography and climate"
},
{
"paragraph_id": 44,
"text": "Notable events that have happened to Croydon's skyline include the Millennium project to create the largest single urban lighting project ever. It was created for the buildings of Croydon to illuminate them for the third millennium. The project provided new lighting for the buildings, and provided an opportunity to project images and words onto them, mixing art and poetry with coloured light, and also displaying public information after dark. Apart from increasing night time activity in Croydon and thereby reducing the fear of crime, it helped to promote the sustainable use of older buildings by displaying them in a more positive way.",
"title": "Geography and climate"
},
{
"paragraph_id": 45,
"text": "There are a large number of attractions and places of interest all across the borough of Croydon, ranging from historic sites in the north and south to modern towers in the centre.",
"title": "Geography and climate"
},
{
"paragraph_id": 46,
"text": "Croydon Airport was once London's main airport, but closed on 30 September 1959 due to the expansion of London and because it didn't have room to grow; so Heathrow International Airport took over as London's main airport. It has now been mostly converted to offices, although some important elements of the airport remain. It is a tourist attraction.",
"title": "Geography and climate"
},
{
"paragraph_id": 47,
"text": "The Croydon Clocktower arts venue was opened by Elizabeth II in 1994. It includes the Braithwaite Hall (the former reference library - named after the Rev. Braithwaite who donated it to the town) for live events, David Lean Cinema (built in memory of David Lean), the Museum of Croydon and Croydon Central Library. The Museum of Croydon (formerly known as Croydon Lifetimes Museum) highlights Croydon in the past and the present and currently features high-profile exhibitions including the Riesco Collection, The Art of Dr Seuss and the Whatever the Weather gallery. Shirley Windmill is a working windmill and one of the few surviving large windmills in Surrey, built in 1854. It is Grade II listed and received a £218,100 grant from the Heritage Lottery Fund. Addington Palace is an 18th-century mansion in Addington which was originally built as Addington Place in the 16th century. The palace became the official second residence of six archbishops, five of whom are buried in St Mary's Church and churchyard nearby.",
"title": "Geography and climate"
},
{
"paragraph_id": 48,
"text": "North End is the main pedestrianised shopping road in Croydon, having Centrale to one side and the Whitgift Centre to the other. The Warehouse Theatre is a popular theatre for mostly young performers and is due to get a face-lift on the Croydon Gateway site.",
"title": "Geography and climate"
},
{
"paragraph_id": 49,
"text": "The Nestlé Tower was the UK headquarters of Nestlé and is one of the tallest towers in England, which is due to be re-fitted during the Park Place development. The Fairfield Halls is a well known concert hall and exhibition centre, opened in 1962. It is frequently used for BBC recordings and was formerly the home of ITV's World of Sport. It includes the Ashcroft Theatre and the Arnhem Gallery.",
"title": "Geography and climate"
},
{
"paragraph_id": 50,
"text": "Croydon Palace was the summer residence of the Archbishop of Canterbury for over 500 years and included regular visitors such as Henry III and Queen Elizabeth I. It is thought to have been built around 960. Croydon Cemetery is a large cemetery and crematorium west of Croydon and is most famous for the gravestone of Derek Bentley, who was wrongly hanged in 1953. Mitcham Common is an area of common land partly shared with the boroughs of Sutton and Merton. Almost 500,000 years ago, Mitcham Common formed part of the river bed of the River Thames.",
"title": "Geography and climate"
},
{
"paragraph_id": 51,
"text": "The BRIT School is a performing Arts & Technology school, owned by the BRIT Trust (known for the BRIT Awards Music Ceremony). Famous former students include Kellie Shirley, Amy Winehouse, Leona Lewis, Adele, Kate Nash, Dane Bowers, Katie Melua and Lyndon David-Hall. Grants is an entertainment venue in the centre of Croydon which includes a Vue cinema.",
"title": "Geography and climate"
},
{
"paragraph_id": 52,
"text": "Surrey Street Market has roots in the 13th century, or earlier, and was chartered by the Archbishop of Canterbury in 1276. The market is regularly used as a location for TV, film and advertising. Croydon Minster, formerly the parish church, was established in the Anglo-Saxon period, and parts of the surviving building (notably the tower) date from the 14th and 15th centuries. However, the church was largely destroyed by fire in 1867, so the present structure is a rebuild of 1867–69 to the designs of George Gilbert Scott. It is the burial place of six archbishops, and contains monuments to Archbishops Sheldon and Whitgift.",
"title": "Geography and climate"
},
{
"paragraph_id": 53,
"text": "The table shows population change since 1801, including the percentage change since previous census. Although the London Borough of Croydon has existed only since 1965, earlier figures have been generated by combining data from the towns, villages, and civil parishes that would later be absorbed into the authority.",
"title": "Demography"
},
{
"paragraph_id": 54,
"text": "According to the 2011 census, Croydon had a population of 363,378, making Croydon the most populated borough in Greater London. The estimated population in 2017 was around 384,800. 186,900 were males, with 197,900 females. The density was 4,448 inhabitants per km. 248,200 residents of Croydon were between the age of 16 and 64.",
"title": "Demography"
},
{
"paragraph_id": 55,
"text": "In 2011, white was the majority ethnicity with 55.1%. Black was the second-largest ethnicity with 20.2%; 16.4% were Asian and 8.3% stated to be something other.",
"title": "Demography"
},
{
"paragraph_id": 56,
"text": "The most common householder type were owner occupied with only a small percentage rented. Many new housing schemes and developments are currently taking place in Croydon, such as The Exchange and Bridge House, IYLO, Wellesley Square (now known as Saffron Square) and Altitude 25. In 2006, The Metropolitan Police recorded a 10% fall in the number of crimes committed in Croydon, better than the rate which crime in London as a whole is falling. Croydon has had the highest fall in the number of cases of violence against the person in south London, and is one of the top 10 safest local authorities in London. According to Your Croydon (a local community magazine) this is due to a stronger partnership struck between Croydon Council and the police. In 2007, overall crime figures across the borough saw decrease of 5%, with the number of incidents decreasing from 32,506 in 2006 to 30,862 in 2007. However, in the year ending April 2012, The Metropolitan Police recorded the highest rates for murder and rape throughout London in Croydon, accounting for almost 10% of all murders, and 7% of all rapes. Croydon has five police stations. Croydon police station is on Park Lane in the centre of the town near the Fairfield Halls; South Norwood police station is a newly refurbished building just off the High Street; Norbury police station is on London Road; Kenley station is on Godstone Road; and New Addington police station is on Addington Village road.",
"title": "Demography"
},
{
"paragraph_id": 57,
"text": "The predominant religion of the borough is Christianity. According to the 2021 United Kingdom census, the borough has over 190,880 Christians, mainly Protestants. This is the largest religious following in the borough followed by Islam with 40,717 Muslims resident.",
"title": "Demography"
},
{
"paragraph_id": 58,
"text": "101,119 Croydon residents stated that they are atheist or non-religious in the 2021 Census.",
"title": "Demography"
},
{
"paragraph_id": 59,
"text": "Croydon Minster is the most notable of the borough's 35 churches. This church was founded in Saxon times, since there is a record of \"a priest of Croydon\" in 960, although the first record of a church building is in the Domesday Book (1086). In its final medieval form, the church was mainly a Perpendicular-style structure, but this was severely damaged by fire in 1867, following which only the tower, south porch and outer walls remained. Under the direction of Sir George Gilbert Scott the church was rebuilt, incorporating the remains and essentially following the design of the medieval building, and was reconsecrated in 1870. It still contains several important monuments and fittings saved from the old church.",
"title": "Demography"
},
{
"paragraph_id": 60,
"text": "The Area Bishop of Croydon is a position as a suffragan Bishop in the Anglican Diocese of Southwark. The present bishop is the Right Reverend Jonathan Clark.",
"title": "Demography"
},
{
"paragraph_id": 61,
"text": "The main employment sectors of the Borough is retail and enterprise which is mainly based in Central Croydon. Major employers are well-known companies, who hold stores or offices in the town. Purley Way is a major employer of people, looking for jobs as sales assistants, sales consultants and store managerial jobs. IKEA Croydon, when it was built in 1992, brought many non-skilled jobs to Croydon. The store, which is a total size of 23,000 m, took over the former site of Croydon Power station, which had led to the unemployment of many skilled workers. In May 2006, the extension of the IKEA made it the fifth biggest employer in Croydon, and includes the extension of the showroom, market hall and self-serve areas.",
"title": "Economy"
},
{
"paragraph_id": 62,
"text": "Other big employers around Purley include the large Tesco Extra store in the town centre, along with other stores in Purley Way including Sainsbury's, B&Q and Vue. Croydon town centre is also a major retail centre, and home to many high street and department stores as well as designer boutiques. The main town centre shopping areas are on the North End precinct, in the Whitgift Centre, Centrale and St George's Walk. Department stores in Croydon town centre include House of Fraser, Marks and Spencer, Allders, Debenhams and T.K. Maxx. Croydon's main market is Surrey Street Market, which has a royal charter dating back to 1276. Shopping areas outside the town centre include the Valley Park retail complex, Croydon Colonnades, Croydon Fiveways, and the Waddon Goods Park.",
"title": "Economy"
},
{
"paragraph_id": 63,
"text": "In research from 2010 on retail footprint, Croydon came out as 29th in terms of retail expenditure at £770 million. This puts it 6th in the Greater London area, falling behind Kingston upon Thames and Westfield London. In 2005, Croydon came 21st, second in London behind the West End, with £909 million, whilst Kingston was 24th with £864 million. In a 2004 survey on the top retail destinations, Croydon was 27th.",
"title": "Economy"
},
{
"paragraph_id": 64,
"text": "In 2007, Croydon leapt up the annual business growth league table, with a 14% rise in new firms trading in the borough after 125 new companies started up, increasing the number from 900 to 1,025, enabling the town, which has also won the Enterprising Britain Award and \"the most enterprising borough in London\" award, to jump from 31 to 14 in the table.",
"title": "Economy"
},
{
"paragraph_id": 65,
"text": "Croydon is home to a variety of international business communities, each with dynamic business networks, so businesses located in Croydon are in a good position to make the most of international trade and recruit from a labour force fluent in 130 languages.",
"title": "Economy"
},
{
"paragraph_id": 66,
"text": "Tramlink created many jobs when it opened in 2000, not only drivers but engineers as well. Many of the people involved came from Croydon, which was the original hub of the system. Retail stores inside both Centrale and the Whitgift Centre as well as on North End employee people regularly and create many jobs, especially at Christmas. As well as the new building of Park Place, which will create yet more jobs, so will the regeneration of Croydon, called Croydon Vision 2020, highlighted in the Croydon Expo which includes the Croydon Gateway, Wellesley Square, Central One plus much more.",
"title": "Economy"
},
{
"paragraph_id": 67,
"text": "Croydon is a major office area in the south east of England, being the largest outside of central London. Many powerful companies based in Europe and worldwide have European or British headquarters in the town. American International Group (AIG) have offices in No. 1 Croydon, formerly the NLA Tower, shared with Liberata, Pegasus and the Institute of Public Finance. AIG is the sixth-largest company in the world according to the 2007 Forbes Global 2000 list. The Swiss company Nestlé has its UK headquarters in the Nestlé Tower, on the site of the formerly proposed Park Place shopping centre. Real Digital International has developed a purpose built 70,000 sq ft (6,500 m) factory on Purley Way equipped with \"the most sophisticated production equipment and technical solutions\". ntl:Telewest, now Virgin Media, have offices at Communications House, from the Telewest side when it was known as Croydon Cable.",
"title": "Economy"
},
{
"paragraph_id": 68,
"text": "The Home Office UK Visas and Immigration department has its headquarters in Lunar House in Central Croydon. In 1981, Superdrug opened a 11,148 m (120,000 sq ft) distribution centre and office complex at Beddington Lane. The head office of international engineering and management consultant Mott MacDonald is located in Mott MacDonald House on Sydenham Road, one of four offices they occupy in the town centre. BT has large offices in Prospect East in Central Croydon. The Royal Bank of Scotland also has large offices in Purley, south of Croydon. Direct Line also has an office opposite Taberner House. Other companies with offices in Croydon include Lloyds TSB, Merrill Lynch and Balfour Beatty. Ann Summers used to have its headquarters in the borough but has moved to the Wapses Lodge Roundabout in Tandridge.",
"title": "Economy"
},
{
"paragraph_id": 69,
"text": "The Council declared bankruptcy via a section 114 notice in December 2020.",
"title": "Economy"
},
{
"paragraph_id": 70,
"text": "East Croydon and West Croydon are the main stations in the borough. South Croydon railway station is also a railway station in Croydon, but it is lesser known.",
"title": "Transport"
},
{
"paragraph_id": 71,
"text": "East Croydon is served by Govia Thameslink Railway, operating under the Southern and Thameslink brands. Services travel via the Brighton Main Line north to London Victoria, London Bridge, London St Pancras, Luton Airport, Bedford, Cambridge and Peterborough and south to Gatwick Airport, Ore, Brighton, Littlehampton, Bognor Regis, Southampton and Portsmouth. East Croydon is the largest and busiest station in Croydon and the third busiest in London, excluding Travelcard Zone 1.",
"title": "Transport"
},
{
"paragraph_id": 72,
"text": "East Croydon was served by long distance Arriva CrossCountry services to Birmingham and the North of England until they were withdrawn in December 2008.",
"title": "Transport"
},
{
"paragraph_id": 73,
"text": "West Croydon is served by London Overground and Southern services north to Highbury & Islington, London Bridge and London Victoria, and south to Sutton and Epsom Downs.",
"title": "Transport"
},
{
"paragraph_id": 74,
"text": "South Croydon is mainly served by Network Rail services operated by Southern for suburban lines to and from London Bridge, London Victoria and the eastern part of Surrey.",
"title": "Transport"
},
{
"paragraph_id": 75,
"text": "Croydon is one of only five London Boroughs not to have at least one London Underground station within its boundaries, with the closest tube station being Morden.",
"title": "Transport"
},
{
"paragraph_id": 76,
"text": "A sizeable bus infrastructure which is part of the London Buses network operates from a hub at West Croydon bus station. The original bus station opened in May 1985, closing in October 2014. A new bus station opened in October 2016.",
"title": "Transport"
},
{
"paragraph_id": 77,
"text": "Addington Village Interchange is a regional bus terminal in Addington Village which has an interchange between Tramlink and bus services in the remote area. Services are operated under contract by Abellio London, Arriva London, London Central, Metrobus, Quality Line and Selkent.",
"title": "Transport"
},
{
"paragraph_id": 78,
"text": "The Tramlink light rail system opened in 2000, serving the borough and surrounding areas. Its network consists of three lines, from Elmers End to West Croydon, from Beckenham to West Croydon, and from New Addington to Wimbledon, with all three lines running via the Croydon loop on which it is centred. It is also the only tram system in London but there is another light rail system, the Docklands Light Railway. It serves Mitcham, Woodside, Addiscombe and the Purley Way retail and industrial area amongst others.",
"title": "Transport"
},
{
"paragraph_id": 79,
"text": "Croydon is linked into the national motorway network via the M23 and M25 orbital motorway. The M25 skirts the south of the borough, linking Croydon with other parts of London and the surrounding counties; the M23 branches from the M25 close to Coulsdon, linking the town with the south coast, Crawley, Reigate, and Gatwick Airport. The A23 connects the borough with the motorways. The A23 is the major trunk road through Croydon, linking it with central London, East Sussex, Horsham, and Littlehaven. The old London to Brighton road, passes through the west of the borough on Purley Way, bypassing the commercial centre of Croydon which it once did.",
"title": "Transport"
},
{
"paragraph_id": 80,
"text": "The A22 and A23 are the major trunk roads through Croydon. These both run north–south, connecting to each other in Purley. The A22 connects Croydon, its starting point, to East Grinstead, Tunbridge Wells, Uckfield, and Eastbourne. Other major roads generally radiate spoke-like from the town centre. The A23 road, cuts right through Croydon, and it starts from London and links to Brighton and Gatwick Airport .Wellesley Road is an urban dual carriageway which cuts through the middle of the central business district. It was constructed in the 1960s as part of a planned ring road for Croydon and includes an underpass, which allows traffic to avoid going into the town centre.",
"title": "Transport"
},
{
"paragraph_id": 81,
"text": "The closest international airport to Croydon is Gatwick Airport, which is located 19 miles (31 km) from the town centre. Gatwick Airport opened in August 1930 as an aerodrome and is a major international operational base for British Airways, EasyJet and Virgin Atlantic. It currently handles around 35 million passengers a year, making it London's second largest airport, and the second busiest airport in the United Kingdom after Heathrow. Heathrow, London City and Luton airports all lie within a two hours' drive of Croydon. Gatwick and Luton Airports are connected to Croydon by frequent direct trains, while Heathrow is accessible by the route SL7 bus.",
"title": "Transport"
},
{
"paragraph_id": 82,
"text": "Although hilly, Croydon is compact and has few major trunk roads running through it. It is on one of the Connect2 schemes which are part of the National Cycle Network route running around Croydon. The North Downs, an area of outstanding natural beauty popular with both on- and off-road cyclists, is so close to Croydon that part of the park lies within the borough boundary, and there are routes into the park almost from the civic centre.",
"title": "Transport"
},
{
"paragraph_id": 83,
"text": "In March 2011, the main forms of transport that residents used to travel to work were: driving a car or van, 20.2% of all residents aged 16–74; train, 59.5%; bus, minibus or coach, 7.5%; on foot, 5.1%; underground, metro, light rail, tram, 4.3%; work mainly at or from home, 2.9%; passenger in a car or van, 1.5%.",
"title": "Transport"
},
{
"paragraph_id": 84,
"text": "Home Office policing in Croydon is provided by the Metropolitan Police. The force's Croydon arm have their head offices for policing on Park Lane next to the Fairfield Halls and Croydon College in central Croydon. Public transport is co-ordinated by Transport for London. Statutory emergency fire and rescue service is provided by the London Fire Brigade, which has five stations in Croydon.",
"title": "Public services"
},
{
"paragraph_id": 85,
"text": "NHS South West London Clinical Commissioning Group (A merger of the previous NHS Croydon CCG and others in South West London) is the body responsible for public health and for planning and funding health services in the borough. Croydon has 227 GPs in 64 practices, 156 dentists in 51 practices, 166 pharmacists and 70 optometrists in 28 practices.",
"title": "Public services"
},
{
"paragraph_id": 86,
"text": "Croydon University Hospital, formerly known as Mayday Hospital, built on a 19-acre (7.7 ha) site in Thornton Heath at the west of Croydon's boundaries with Merton, is a large NHS hospital administered by Croydon Health Services NHS Trust. Former names of the hospital include the Croydon Union Infirmary from 1885 to 1923 and the Mayday Road Hospital from 1923 to around 1930. It is a District General Hospital with a 24-hour accident and emergency department. NHS Direct has a regional centre based at the hospital. The NHS Trust also provides services at Purley War Memorial Hospital, in Purley. Croydon General Hospital was on London Road but services transferred to Mayday, as the size of this hospital was insufficient to cope with the growing population of the borough. Sickle Cell and Thalassaemia Centre and the Emergency Minor Treatment Centre are other smaller hospitals operated by the Mayday in the borough. Cane Hill was a psychiatric hospital in Coulsdon.",
"title": "Public services"
},
{
"paragraph_id": 87,
"text": "Waste management is co-ordinated by the local authority. Unlike other waste disposal authorities in Greater London, Croydon's rubbish is collected independently and isn't part of a waste authority unit. Locally produced inert waste for disposal is sent to landfill in the south of Croydon. There have recently been calls by the ODPM to bring waste management powers to the Greater London Authority, giving it a waste function. The Mayor of London has made repeated attempts to bring the different waste authorities together, to form a single waste authority in London. This has faced significant opposition from existing authorities. However, it has had significant support from all other sectors and the surrounding regions managing most of London's waste. Croydon has the joint best recycling rate in London, at 36%, but the refuse collectors have been criticised for their rushed performance lacking quality. Croydon's distribution network operator for electricity is EDF Energy Networks; there are no power stations in the borough. Thames Water manages Croydon's drinking and waste water; water supplies being sourced from several local reservoirs, including Beckton and King George VI. Before 1971, Croydon Corporation was responsible for water treatment in the borough.",
"title": "Public services"
},
{
"paragraph_id": 88,
"text": "The borough of Croydon is 86.52 km, populating approximately 340,000 people. There are five fire stations within the borough; Addington (two pumping appliances), Croydon (two pumping appliances, incident response unit, fire rescue unit and a USAR appliance), Norbury (two pumping appliances), Purley (one pumping appliance) and Woodside (one pumping appliance). Purley has the largest station ground, but dealt with the fewest incidents during 2006/07.",
"title": "Public services"
},
{
"paragraph_id": 89,
"text": "The fire stations, as part of the Community Fire Safety scheme, visited 49 schools in 2006/2007.",
"title": "Public services"
},
{
"paragraph_id": 90,
"text": "The borough compared with the other London boroughs has the highest number of schools in it, with 26% of its population under 20 years old. They include primary schools (95), secondary schools (21) and four further education establishments. Croydon College has its main building in Central Croydon, it is a high rise building. John Ruskin College is one of the other colleges in the borough, located in Addington and Coulsdon College in Coulsdon. South Norwood has been the home of Spurgeon's College, a world-famous Baptist theological college, since 1923; Spurgeon's is located on South Norwood Hill and currently has some 1000 students. The London Borough of Croydon is the local education authority for the borough.",
"title": "Public services"
},
{
"paragraph_id": 91,
"text": "Overall, Croydon was ranked 77th out of all the local education authorities in the UK, up from 92nd in 2007. In 2007, the Croydon LEA was ranked 81st out of 149 in the country – and 21st in Greater London – based on the percentage of pupils attaining at least 5 A*–C grades at GCSE including maths and English (37.8% compared with the national average of 46.7%). The most successful public sector schools in 2010 were Harris City Academy Crystal Palace and Coloma Convent Girls' School. The percentage of pupils achieving 5 A*-C GCSEs including maths and English was above the national average in 2010.",
"title": "Public services"
},
{
"paragraph_id": 92,
"text": "The borough of Croydon has 14 libraries, a joint library and a mobile library. Many of the libraries were built a long time ago and therefore have become outdated, so the council started updating a few including Ashburton Library which moved from its former spot into the state-of-the-art Ashburton Learning Village complex which is on the former site of the old 'A Block' of Ashburton Community School which is now situated inside the centre. The library is now on one floor. This format was planned to be rolled out across all of the council's libraries but what was seen as costing too much.",
"title": "Public services"
},
{
"paragraph_id": 93,
"text": "South Norwood Library, New Addington Library, Shirley Library, Selsdon Library, Sanderstead Library, Broad Green, Purley Library, Coulsdon Library and Bradmore Green Library are examples of older council libraries. The main library is Croydon Central Library which holds many references, newspaper archives and a tourist information point (one of three in southeast London). Upper Norwood Library is a joint library with the London Borough of Lambeth. This means that both councils fund the library and its resources, but even though Lambeth have nearly doubled their funding for the library in the past several years Croydon has kept it the same, doubting the future of the library.",
"title": "Public services"
},
{
"paragraph_id": 94,
"text": "The borough has been criticised in the past for not having enough leisure facilities, maintaining the position of Croydon as a three star borough. Thornton Heath's ageing sports centre has been demolished and replaced by a newer more modern leisure centre. South Norwood Leisure Centre was closed down in 2006 so that it could be demolished and re-designed from scratch like Thornton Heath, at an estimated cost of around £10 million.",
"title": "Sport and leisure"
},
{
"paragraph_id": 95,
"text": "In May 2006 the Conservative Party took control of Croydon Council and decided a refurbishment would be more economical than rebuilding, this decision caused some controversy.",
"title": "Sport and leisure"
},
{
"paragraph_id": 96,
"text": "Sport Croydon, is the commercial arm for leisure in the borough. Fusion currently provides leisure services for the council, a contract previously held by Parkwood Leisure.",
"title": "Sport and leisure"
},
{
"paragraph_id": 97,
"text": "Football teams include Crystal Palace F.C., which play at Selhurst Park, and in the Premier League. AFC Croydon Athletic, whose nickname is The Rams, is a football club who play at Croydon Sports Arena along with Croydon F.C., both in the Combined Counties League and Holmesdale, who were founded in South Norwood but currently playing on Oakley Road in Bromley, currently in the Southern Counties East Football League.",
"title": "Sport and leisure"
},
{
"paragraph_id": 98,
"text": "Non-football teams that play in Croydon are Streatham-Croydon RFC, a rugby union club in Thornton Heath who play at Frant Road, as well as South London Storm Rugby League Club, based at Streatham's ground, who compete in the Rugby League Conference. Another rugby union club that play in Croydon is Croydon RFC, who play at Addington Road. The London Olympians are an American Football team that play in Division 1 South in the British American Football League. The Croydon Pirates are one of the most successful teams in the British Baseball Federation, though their ground is actually just located outside the borough in Sutton.",
"title": "Sport and leisure"
},
{
"paragraph_id": 99,
"text": "Croydon Amphibians SC plays in the Division 2 British Water Polo League. The team won the National League Division 2 in 2008.",
"title": "Sport and leisure"
},
{
"paragraph_id": 100,
"text": "Croydon has over 120 parks and open spaces, ranging from the 200-acre (0.81 km) Selsdon Wood Nature Reserve to many recreation grounds and sports fields scattered throughout the Borough.",
"title": "Sport and leisure"
},
{
"paragraph_id": 101,
"text": "Croydon has cut funding to the Warehouse Theatre.",
"title": "Culture"
},
{
"paragraph_id": 102,
"text": "In 2005, Croydon Council drew up a Public Art Strategy, with a vision intended to be accessible and to enhance people's enjoyment of their surroundings. The public art strategy delivered a new event called Croydon's Summer Festival hosted in Lloyd Park. The festival consists of two days of events. The first is called Croydon's World Party which is a free one-day event with three stages featuring world, jazz and dance music from the UK and internationally. The final days event is the Croydon Mela, a day of music with a mix of traditional Asian culture and east-meets-western club beats across four stages as well as dozens of food stalls and a funfair. It has attracted crowds of over 50,000 people. The strategy also created a creative industries hub in Old Town, ensured that public art is included in developments such as College Green and Ruskin Square and investigated the possibility of gallery space in the Cultural Quarter.",
"title": "Culture"
},
{
"paragraph_id": 103,
"text": "Fairfield Halls, Arnhem Gallery and the Ashcroft Theatre show productions that are held throughout the year such as drama, ballet, opera and pantomimes and can be converted to show films. It also contains the Arnhem Gallery civic hall and an art gallery. Other cultural activities, including shopping and exhibitions, are Surrey Street Market which is mainly a meat and vegetables market near the main shopping environment of Croydon. The market has a Royal Charter dating back to 1276. Airport House is a newly refurbished conference and exhibition centre inside part of Croydon Airport. The Whitgift Centre is the current main shopping centre in the borough. Centrale is a new shopping centre that houses many more familiar names, as well as Croydon's House of Fraser.",
"title": "Culture"
},
{
"paragraph_id": 104,
"text": "There are three local newspapers which operate within the borough. The Croydon Advertiser began life in 1869, and was in 2005 the third-best selling paid-for weekly newspaper in London. The Advertiser is Croydon's major paid-for weekly paper and is on sale every Friday in five geographical editions: Croydon; Sutton & Epsom; Coulsdon & Purley; New Addington; and Caterham. The paper converted from a broadsheet to a compact (tabloid) format on 31 March 2006. It was bought by Northcliffe Media which is part of the Daily Mail and General Trust group on 6 July 2007. The Croydon Post is a free newspaper available across the borough and is operated by the Advertiser group. The circulation of the newspaper was in 2008 more than the main title published by the Advertiser Group.",
"title": "Media"
},
{
"paragraph_id": 105,
"text": "The Croydon Guardian is another local weekly paper, which is paid for at newsagents but free at Croydon Council libraries and via deliveries. It is one of the best circulated local newspapers in London and once had the highest circulation in Croydon with around one thousand more copies distributed than The Post.",
"title": "Media"
},
{
"paragraph_id": 106,
"text": "The borough is served by the London regional versions of BBC and ITV coverage, from either the Crystal Palace or Croydon transmitters.",
"title": "Media"
},
{
"paragraph_id": 107,
"text": "Croydon Television is owned by Croydon broadcasting corporation. Broadcasting from studios in Croydon, the CBC is fully independent. It does not receive any government or local council grants or funding and is supported by donations, sponsorship and by commercial advertising.",
"title": "Media"
},
{
"paragraph_id": 108,
"text": "Capital Radio and Gold serve the borough. Local BBC radio is provided by BBC London 94.9. Other stations include Kiss 100, Absolute Radio and Magic 105.4 FM from Bauer Radio and Capital Xtra, Heart 106.2 and Smooth Radio from Global Radio. In 2012, Croydon Radio, an online and FM radio station, and the first official FM radio station for the London Borough of Croydon, began serving the area. The borough is also home to its own local TV station, Croydon TV.",
"title": "Media"
},
{
"paragraph_id": 109,
"text": "The London Borough of Croydon is twinned with the municipality of Arnhem which is located in the east of the Netherlands. The city of Arnhem is one of the 20 largest cities in the Netherlands. They have been twinned since 1946 after both towns had suffered extensive bomb damage during the recently ended war. There is also a Guyanese link supported by the council.",
"title": "Twinning"
},
{
"paragraph_id": 110,
"text": "In September 2009 it was revealed that Croydon Council had around £20m of its pension fund for employees invested in shares in Imperial Tobacco and British American Tobacco. Members of the opposition Labour group on the council, who had banned such shareholdings when in control, described this as \"dealing in death\" and inconsistent with the council's tobacco control strategy.",
"title": "Investment in the tobacco industry"
},
{
"paragraph_id": 111,
"text": "The following people and military units have received the Freedom of the Borough of Croydon.",
"title": "Freedom of the Borough"
},
{
"paragraph_id": 112,
"text": "",
"title": "Freedom of the Borough"
},
{
"paragraph_id": 113,
"text": "",
"title": "Freedom of the Borough"
}
] | The London Borough of Croydon is a London borough in south London, part of Outer London. It covers an area of 87 km2 (33.6 sq mi). It is the southernmost borough of London. At its centre is the historic town of Croydon from which the borough takes its name; while other urban centres include Coulsdon, Purley, South Norwood, Norbury, New Addington, Selsdon and Thornton Heath. Croydon is mentioned in Domesday Book, and from a small market town has expanded into one of the most populous areas on the fringe of London. The borough is now one of London's leading business, financial and cultural centres, and its influence in entertainment and the arts contribute to its status as a major metropolitan centre. Its population is 390,719, making it the largest London borough and sixteenth largest English district. The borough was formed in 1965 from the merger of the County Borough of Croydon with Coulsdon and Purley Urban District, both of which had been within Surrey. The local authority, Croydon London Borough Council, is now part of London Councils, the local government association for Greater London. The economic strength of Croydon dates back mainly to Croydon Airport which was a major factor in the development of Croydon as a business centre. Once London's main airport for all international flights to and from the capital, it was closed on 30 September 1959 due to the lack of expansion space needed for an airport to serve the growing city. It is now a Grade II listed building and tourist attraction. Croydon Council and its predecessor Croydon Corporation unsuccessfully applied for city status in 1954, 2000, 2002 and 2012. The area is currently going through a large regeneration project called Croydon Vision 2020 which is predicted to attract more businesses and tourists to the area as well as backing Croydon's bid to become "London's Third City". Croydon is mostly urban, though there are large suburban and rural uplands towards the south of the borough. Since 2003, Croydon has been certified as a Fairtrade borough by the Fairtrade Foundation. It was the first London borough to have Fairtrade status which is awarded on certain criteria. The area is one of the hearts of culture in London and the South East of England. Institutions such as the major arts and entertainment centre Fairfield Halls add to the vibrancy of the borough. However, its famous fringe theatre, the Warehouse Theatre, went into administration in 2012 when the council withdrew funding, and the building itself was demolished in 2013. The Croydon Clocktower was opened by Queen Elizabeth II in 1994 as an arts venue featuring a library, the independent David Lean Cinema and museum. From 2000 to 2010, Croydon staged an annual summer festival celebrating the area's black and Indian cultural diversity, with audiences reaching over 50,000 people. Premier League football club Crystal Palace F.C. play at Selhurst Park in Selhurst, a stadium they have been based in since 1924. Other landmarks in the borough include Addington Palace, an eighteenth-century mansion which became the official second residence of six Archbishops of Canterbury, Shirley Windmill, one of the few surviving large windmills in Greater London built in the 1850s, and the BRIT School, a creative arts institute run by the BRIT Trust which has produced artists such as Adele, Amy Winehouse and Leona Lewis. | 2001-11-20T04:55:46Z | 2023-12-04T20:49:08Z | [
"Template:Blockquote",
"Template:Main",
"Template:Reflist",
"Template:Dead link",
"Template:LB Croydon",
"Template:About",
"Template:Further",
"Template:Historical populations",
"Template:Clarify",
"Template:Citation needed",
"Template:Use dmy dates",
"Template:Audio",
"Template:Cite book",
"Template:Use British English",
"Template:Convert",
"Template:Portal",
"Template:Cite web",
"Template:Cite news",
"Template:ISBN",
"Template:Cite journal",
"Template:Climate chart",
"Template:Incomplete list",
"Template:London",
"Template:Authority control",
"Template:R",
"Template:Sister project links",
"Template:See also",
"Template:NHLE",
"Template:Webarchive",
"Template:Infobox settlement",
"Template:TOC limit"
] | https://en.wikipedia.org/wiki/London_Borough_of_Croydon |
7,188 | Carme (moon) | Carme /ˈkɑːrmiː/ is a retrograde irregular satellite of Jupiter. It was discovered by Seth Barnes Nicholson at Mount Wilson Observatory in California in July 1938. It is named after the mythological Carme, mother by Zeus of Britomartis, a Cretan goddess.
Carme did not receive its present name until 1975; before then, it was simply known as Jupiter XI. It was sometimes called "Pan" between 1955 and 1975 (Pan is now the name of a satellite of Saturn).
It gives its name to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between 23 and 24 Gm and at an inclination of about 165°. Its orbital elements are as of January 2000. They are continuously changing due to solar and planetary perturbations.
With a diameter of 46.7±0.9 km, it is the largest member of the Carme group and the fourth largest irregular moon of Jupiter. It is light red in color (B−V=0.76, V−R=0.47), similar to D-type asteroids and consistent with Taygete, but not Kalyke. | [
{
"paragraph_id": 0,
"text": "Carme /ˈkɑːrmiː/ is a retrograde irregular satellite of Jupiter. It was discovered by Seth Barnes Nicholson at Mount Wilson Observatory in California in July 1938. It is named after the mythological Carme, mother by Zeus of Britomartis, a Cretan goddess.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Carme did not receive its present name until 1975; before then, it was simply known as Jupiter XI. It was sometimes called \"Pan\" between 1955 and 1975 (Pan is now the name of a satellite of Saturn).",
"title": "History"
},
{
"paragraph_id": 2,
"text": "It gives its name to the Carme group, made up of irregular retrograde moons orbiting Jupiter at a distance ranging between 23 and 24 Gm and at an inclination of about 165°. Its orbital elements are as of January 2000. They are continuously changing due to solar and planetary perturbations.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "With a diameter of 46.7±0.9 km, it is the largest member of the Carme group and the fourth largest irregular moon of Jupiter. It is light red in color (B−V=0.76, V−R=0.47), similar to D-type asteroids and consistent with Taygete, but not Kalyke.",
"title": "Properties"
}
] | Carme is a retrograde irregular satellite of Jupiter. It was discovered by Seth Barnes Nicholson at Mount Wilson Observatory in California in July 1938. It is named after the mythological Carme, mother by Zeus of Britomartis, a Cretan goddess. | 2023-01-11T00:55:15Z | [
"Template:Short description",
"Template:Use dmy dates",
"Template:Infobox planet",
"Template:Cite journal",
"Template:Moons of Jupiter",
"Template:IPAc-en",
"Template:Nowrap",
"Template:Val",
"Template:Reflist",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Carme_(moon) |
|
7,193 | Commutator | In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.
The commutator of two elements, g and h, of a group G, is the element [g, h] = g⁻¹h⁻¹gh.
This element is equal to the group's identity if and only if g and h commute (from the definition gh = hg [g, h], being [g, h] equal to the identity if and only if gh = hg).
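As a quick illustration of this definition (a worked example added here, not part of the original article text), take the invertible 2×2 matrices g and h below under matrix multiplication; with the convention [g, h] = g⁻¹h⁻¹gh used above, the commutator comes out different from the identity, confirming that g and h do not commute:

\[
g = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad
h = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \quad
[g,h] = g^{-1}h^{-1}gh
= \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}
  \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}
  \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
  \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}
= \begin{pmatrix} 3 & 1 \\ -1 & 0 \end{pmatrix} \neq I .
\]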
The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.
The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as ghg⁻¹h⁻¹.
Commutator identities are an important tool in group theory. The expression aˣ denotes the conjugate of a by x, defined as x⁻¹ax.
Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).
N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as xax⁻¹. This is often written x a {\displaystyle {}^{x}a} . Similar identities hold for these conventions.
Many identities are used that are true modulo certain subgroups. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well: (xy)² = x²y²[y, x][[y, x], y].
If the derived subgroup is central, then (xy)ⁿ = xⁿyⁿ[y, x]^(n(n−1)/2).
Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently by [a, b] = ab − ba.
The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.
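A minimal worked example of the ring-theoretic commutator (added for illustration; not from the original article), using the 2×2 matrix units:

\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
BA = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \quad
[A,B] = AB - BA = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \neq 0 ,
\]

so A and B do not commute, and the nonzero commutator measures exactly that failure.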
The anticommutator of two elements a and b of a ring or associative algebra is defined by {a, b} = ab + ba.
Sometimes [ a , b ] + {\displaystyle [a,b]_{+}} is used to denote anticommutator, while [ a , b ] − {\displaystyle [a,b]_{-}} is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics.
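A standard example where the anticommutator rather than the commutator vanishes (an illustration added here under the usual Pauli-matrix conventions, not taken from the original article):

\[
\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_1\sigma_2 = i\sigma_3, \quad \sigma_2\sigma_1 = -i\sigma_3 ,
\]
\[
\{\sigma_1, \sigma_2\} = \sigma_1\sigma_2 + \sigma_2\sigma_1 = 0, \qquad
[\sigma_1, \sigma_2] = 2i\sigma_3 .
\]

Anticommutation relations of this kind are what make the Pauli matrices generators of a Clifford algebra.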
The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned.
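The best-known instance is the canonical commutation relation between position and momentum; the short verification below is a standard calculation added for illustration and is not part of the original article text. Acting on a differentiable wavefunction ψ, with \hat{x}\psi = x\psi and \hat{p}\psi = -i\hbar\,d\psi/dx:

\[
[\hat{x}, \hat{p}]\,\psi = -i\hbar\, x\frac{d\psi}{dx} + i\hbar\,\frac{d}{dx}(x\psi) = i\hbar\,\psi ,
\qquad\text{so}\qquad [\hat{x}, \hat{p}] = i\hbar .
\]

Inserting this commutator into the Robertson–Schrödinger relation yields the familiar bound \sigma_x \sigma_p \ge \hbar/2.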
The commutator has the following properties:
Relation (3) is called anticommutativity, while (4) is the Jacobi identity.
If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map ad A : R → R {\displaystyle \operatorname {ad} _{A}:R\rightarrow R} given by ad A ( B ) = [ A , B ] {\displaystyle \operatorname {ad} _{A}(B)=[A,B]} . In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.
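If identity (1) is the usual product rule [A, BC] = [A, B]C + B[A, C] (the numbered list itself is not reproduced above), a one-line expansion, added here as an illustration rather than quoted from the article, shows why it holds and why it reads as a Leibniz rule:

\[
[A,B]C + B[A,C] = (AB - BA)C + B(AC - CA) = ABC - BAC + BAC - BCA = ABC - BCA = [A, BC] .
\]

Read with ad_A(B) = [A, B], this is exactly the statement that ad_A is a derivation.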
From identity (9), one finds that the commutator of integer powers of ring elements is:
[ A N , B M ] = ∑ n = 0 N − 1 ∑ m = 0 M − 1 A n B m [ A , B ] A N − n − 1 B M − m − 1 {\displaystyle [A^{N},B^{M}]=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}A^{n}B^{m}[A,B]A^{N-n-1}B^{M-m-1}}
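As a sanity check of this power formula (a worked special case added here, not in the original article), take N = 2 and M = 1. The double sum reduces to the n = 0 and n = 1 terms with m = 0, giving

\[
[A^2, B] = [A,B]\,A + A\,[A,B] ,
\]

which agrees with the direct expansion A²B − BA² = A(AB − BA) + (AB − BA)A.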
Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example:
Consider a ring or algebra in which the exponential e A = exp ( A ) = 1 + A + 1 2 ! A 2 + ⋯ {\displaystyle e^{A}=\exp(A)=1+A+{\tfrac {1}{2!}}A^{2}+\cdots } can be meaningfully defined, such as a Banach algebra or a ring of formal power series.
In such a ring, Hadamard's lemma applied to nested commutators gives: e A B e − A = B + [ A , B ] + 1 2 ! [ A , [ A , B ] ] + 1 3 ! [ A , [ A , [ A , B ] ] ] + ⋯ = e ad A ( B ) . {\textstyle e^{A}Be^{-A}\ =\ B+[A,B]+{\frac {1}{2!}}[A,[A,B]]+{\frac {1}{3!}}[A,[A,[A,B]]]+\cdots \ =\ e^{\operatorname {ad} _{A}}(B).} (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)).
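A frequently used special case (stated here as a standard consequence for illustration; it is not quoted from the original article): if [A, B] commutes with both A and B, the Baker–Campbell–Hausdorff series terminates after the first commutator, so that

\[
e^{A}e^{B} = e^{A + B + \tfrac{1}{2}[A,B]} .
\]

This is the form most often applied with the canonical commutation relation, where [A, B] is a multiple of the identity.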
A similar expansion expresses the group commutator of expressions e A {\displaystyle e^{A}} (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets),
When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as [ω, η] = ωη − (−1)^(deg ω·deg η)ηω.
Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element x ∈ R {\displaystyle x\in R} , we define the adjoint mapping a d x : R → R {\displaystyle \mathrm {ad} _{x}:R\to R} by: adₓ(y) = [x, y].
This mapping is a derivation on the ring R: adₓ(yz) = adₓ(y)z + y adₓ(z).
By the Jacobi identity, it is also a derivation over the commutation operation: adₓ([y, z]) = [adₓ(y), z] + [y, adₓ(z)].
Composing such mappings, we get for example ad x ad y ( z ) = [ x , [ y , z ] ] {\displaystyle \operatorname {ad} _{x}\operatorname {ad} _{y}(z)=[x,[y,z]\,]} and
We may consider a d {\displaystyle \mathrm {ad} } itself as a mapping, a d : R → E n d ( R ) {\displaystyle \mathrm {ad} :R\to \mathrm {End} (R)} , where E n d ( R ) {\displaystyle \mathrm {End} (R)} is the ring of mappings from R to itself with composition as the multiplication operation. Then a d {\displaystyle \mathrm {ad} } is a Lie algebra homomorphism, preserving the commutator:
By contrast, it is not always a ring homomorphism: usually ad x y ≠ ad x ad y {\displaystyle \operatorname {ad} _{xy}\,\neq \,\operatorname {ad} _{x}\operatorname {ad} _{y}} .
The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:
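One common way to write this abstract Leibniz rule, together with its smallest nontrivial case, is sketched below (added as an illustration; the exact display in the original article may differ):

\[
x^{n}y = \sum_{k=0}^{n} \binom{n}{k}\, \operatorname{ad}_x^{\,k}(y)\, x^{n-k},
\qquad\text{e.g.}\qquad
x^{2}y = yx^{2} + 2[x,y]\,x + [x,[x,y]] .
\]

The n = 2 case can be checked directly by expanding [x,[x,y]] = x²y − 2xyx + yx².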
Replacing x by the differentiation operator ∂ {\displaystyle \partial } , and y by the multiplication operator m f : g ↦ f g {\displaystyle m_{f}:g\mapsto fg} , we get ad ( ∂ ) ( m f ) = m ∂ ( f ) {\displaystyle \operatorname {ad} (\partial )(m_{f})=m_{\partial (f)}} , and applying both sides to a function g, the identity becomes the usual Leibniz rule for the n-th derivative ∂ n ( f g ) {\displaystyle \partial ^{n}\!(fg)} . | [
{
"paragraph_id": 0,
"text": "In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The commutator of two elements, g and h, of a group G, is the element",
"title": "Group theory"
},
{
"paragraph_id": 2,
"text": "This element is equal to the group's identity if and only if g and h commute (from the definition gh = hg [g, h], being [g, h] equal to the identity if and only if gh = hg).",
"title": "Group theory"
},
{
"paragraph_id": 3,
"text": "The set of all commutators of a group is not in general closed under the group operation, but the subgroup of G generated by all commutators is closed and is called the derived group or the commutator subgroup of G. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.",
"title": "Group theory"
},
{
"paragraph_id": 4,
"text": "The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as",
"title": "Group theory"
},
{
"paragraph_id": 5,
"text": "Commutator identities are an important tool in group theory. The expression a denotes the conjugate of a by x, defined as xax.",
"title": "Group theory"
},
{
"paragraph_id": 6,
"text": "Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).",
"title": "Group theory"
},
{
"paragraph_id": 7,
"text": "N.B., the above definition of the conjugate of a by x is used by some group theorists. Many other group theorists define the conjugate of a by x as xax. This is often written x a {\\displaystyle {}^{x}a} . Similar identities hold for these conventions.",
"title": "Group theory"
},
{
"paragraph_id": 8,
"text": "Many identities are used that are true modulo certain subgroups. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well:",
"title": "Group theory"
},
{
"paragraph_id": 9,
"text": "If the derived subgroup is central, then",
"title": "Group theory"
},
{
"paragraph_id": 10,
"text": "Rings often do not support division. Thus, the commutator of two elements a and b of a ring (or any associative algebra) is defined differently by",
"title": "Ring theory"
},
{
"paragraph_id": 11,
"text": "The commutator is zero if and only if a and b commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.",
"title": "Ring theory"
},
{
"paragraph_id": 12,
"text": "The anticommutator of two elements a and b of a ring or associative algebra is defined by",
"title": "Ring theory"
},
{
"paragraph_id": 13,
"text": "Sometimes [ a , b ] + {\\displaystyle [a,b]_{+}} is used to denote anticommutator, while [ a , b ] − {\\displaystyle [a,b]_{-}} is then used for commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics.",
"title": "Ring theory"
},
{
"paragraph_id": 14,
"text": "The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned.",
"title": "Ring theory"
},
{
"paragraph_id": 15,
"text": "The commutator has the following properties:",
"title": "Ring theory"
},
{
"paragraph_id": 16,
"text": "Relation (3) is called anticommutativity, while (4) is the Jacobi identity.",
"title": "Ring theory"
},
{
"paragraph_id": 17,
"text": "If A is a fixed element of a ring R, identity (1) can be interpreted as a Leibniz rule for the map ad A : R → R {\\displaystyle \\operatorname {ad} _{A}:R\\rightarrow R} given by ad A ( B ) = [ A , B ] {\\displaystyle \\operatorname {ad} _{A}(B)=[A,B]} . In other words, the map adA defines a derivation on the ring R. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules. Identities (7), (8) express Z-bilinearity.",
"title": "Ring theory"
},
{
"paragraph_id": 18,
"text": "From identity (9), one finds that the commutator of integer powers of ring elements is:",
"title": "Ring theory"
},
{
"paragraph_id": 19,
"text": "[ A N , B M ] = ∑ n = 0 N − 1 ∑ m = 0 M − 1 A n B m [ A , B ] A N − n − 1 B M − m − 1 {\\displaystyle [A^{N},B^{M}]=\\sum _{n=0}^{N-1}\\sum _{m=0}^{M-1}A^{n}B^{m}[A,B]A^{N-n-1}B^{M-m-1}}",
"title": "Ring theory"
},
{
"paragraph_id": 20,
"text": "Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example:",
"title": "Ring theory"
},
{
"paragraph_id": 21,
"text": "Consider a ring or algebra in which the exponential e A = exp ( A ) = 1 + A + 1 2 ! A 2 + ⋯ {\\displaystyle e^{A}=\\exp(A)=1+A+{\\tfrac {1}{2!}}A^{2}+\\cdots } can be meaningfully defined, such as a Banach algebra or a ring of formal power series.",
"title": "Ring theory"
},
{
"paragraph_id": 22,
"text": "In such a ring, Hadamard's lemma applied to nested commutators gives: e A B e − A = B + [ A , B ] + 1 2 ! [ A , [ A , B ] ] + 1 3 ! [ A , [ A , [ A , B ] ] ] + ⋯ = e ad A ( B ) . {\\textstyle e^{A}Be^{-A}\\ =\\ B+[A,B]+{\\frac {1}{2!}}[A,[A,B]]+{\\frac {1}{3!}}[A,[A,[A,B]]]+\\cdots \\ =\\ e^{\\operatorname {ad} _{A}}(B).} (For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of log(exp(A) exp(B)).",
"title": "Ring theory"
},
{
"paragraph_id": 23,
"text": "A similar expansion expresses the group commutator of expressions e A {\\displaystyle e^{A}} (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets),",
"title": "Ring theory"
},
{
"paragraph_id": 24,
"text": "When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as",
"title": "Graded rings and algebras"
},
{
"paragraph_id": 25,
"text": "Especially if one deals with multiple commutators in a ring R, another notation turns out to be useful. For an element x ∈ R {\\displaystyle x\\in R} , we define the adjoint mapping a d x : R → R {\\displaystyle \\mathrm {ad} _{x}:R\\to R} by:",
"title": "Adjoint derivation"
},
{
"paragraph_id": 26,
"text": "This mapping is a derivation on the ring R:",
"title": "Adjoint derivation"
},
{
"paragraph_id": 27,
"text": "By the Jacobi identity, it is also a derivation over the commutation operation:",
"title": "Adjoint derivation"
},
{
"paragraph_id": 28,
"text": "Composing such mappings, we get for example ad x ad y ( z ) = [ x , [ y , z ] ] {\\displaystyle \\operatorname {ad} _{x}\\operatorname {ad} _{y}(z)=[x,[y,z]\\,]} and",
"title": "Adjoint derivation"
},
{
"paragraph_id": 29,
"text": "We may consider a d {\\displaystyle \\mathrm {ad} } itself as a mapping, a d : R → E n d ( R ) {\\displaystyle \\mathrm {ad} :R\\to \\mathrm {End} (R)} , where E n d ( R ) {\\displaystyle \\mathrm {End} (R)} is the ring of mappings from R to itself with composition as the multiplication operation. Then a d {\\displaystyle \\mathrm {ad} } is a Lie algebra homomorphism, preserving the commutator:",
"title": "Adjoint derivation"
},
{
"paragraph_id": 30,
"text": "By contrast, it is not always a ring homomorphism: usually ad x y ≠ ad x ad y {\\displaystyle \\operatorname {ad} _{xy}\\,\\neq \\,\\operatorname {ad} _{x}\\operatorname {ad} _{y}} .",
"title": "Adjoint derivation"
},
{
"paragraph_id": 31,
"text": "The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:",
"title": "Adjoint derivation"
},
{
"paragraph_id": 32,
"text": "Replacing x by the differentiation operator ∂ {\\displaystyle \\partial } , and y by the multiplication operator m f : g ↦ f g {\\displaystyle m_{f}:g\\mapsto fg} , we get ad ( ∂ ) ( m f ) = m ∂ ( f ) {\\displaystyle \\operatorname {ad} (\\partial )(m_{f})=m_{\\partial (f)}} , and applying both sides to a function g, the identity becomes the usual Leibniz rule for the n-th derivative ∂ n ( f g ) {\\displaystyle \\partial ^{n}\\!(fg)} .",
"title": "Adjoint derivation"
}
] | In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory. | 2001-11-20T16:05:38Z | 2023-11-03T19:13:34Z | [
"Template:Use shortened footnotes",
"Template:Mvar",
"Template:Reflist",
"Template:Springer",
"Template:Authority control",
"Template:About",
"Template:Math",
"Template:Harvtxt",
"Template:Citation",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Commutator |
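The ring-theoretic identities listed above (anticommutativity, the Jacobi identity, the Leibniz rule for ad A, and the expansion of commutators of powers) are easy to check numerically. The following is a minimal sketch in Python, assuming NumPy is available; the 4×4 random matrices and the helper names comm and anticomm are illustrative choices, not anything defined in the article above.

import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

def comm(X, Y):
    # ring-theoretic commutator [X, Y] = XY - YX
    return X @ Y - Y @ X

def anticomm(X, Y):
    # anticommutator [X, Y]_+ = XY + YX
    return X @ Y + Y @ X

# Anticommutativity: [A, B] = -[B, A]
assert np.allclose(comm(A, B), -comm(B, A))

# Jacobi identity: [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0
assert np.allclose(comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B)), 0)

# ad_A is a derivation (Leibniz rule): [A, BC] = [A, B]C + B[A, C]
assert np.allclose(comm(A, B @ C), comm(A, B) @ C + B @ comm(A, C))

# Commutator of powers, the N = 2, M = 1 case of the double-sum formula: [A^2, B] = A[A, B] + [A, B]A
assert np.allclose(comm(A @ A, B), A @ comm(A, B) + comm(A, B) @ A)

# Commutator and anticommutator together recover the product: AB = ([A, B] + [A, B]_+) / 2
assert np.allclose(A @ B, (comm(A, B) + anticomm(A, B)) / 2)

Matrices are used here only because matrix multiplication is a convenient associative, non-commutative product; any other associative algebra in which the expressions make sense would serve equally well.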
7,196 | Cairn | A cairn is a human-made pile (or stack) of stones raised for a purpose, usually as a marker or as a burial mound. The word cairn comes from the Scottish Gaelic: càrn [ˈkʰaːrˠn̪ˠ] (plural càirn [ˈkʰaːrˠɲ]).
Cairns have been and are used for a broad variety of purposes. In prehistory, they were raised as markers, as memorials and as burial monuments (some of which contained chambers). In the modern era, cairns are often raised as landmarks, especially to mark the summits of mountains. Cairns are also used as trail markers. They vary in size from small stone markers to entire artificial hills, and in complexity from loose conical rock piles to elaborate megalithic structures. Cairns may be painted or otherwise decorated, whether for increased visibility or for religious reasons.
A variant is the inuksuk (plural inuksuit), used by the Inuit and other peoples of the Arctic region of North America.
The building of cairns for various purposes goes back into prehistory in Eurasia, ranging in size from small rock sculptures to substantial human-made hills of stone (some built on top of larger, natural hills). The latter are often relatively massive Bronze Age or earlier structures which, like kistvaens and dolmens, frequently contain burials; they are comparable to tumuli (kurgans), but of stone construction instead of earthworks. Cairn originally could more broadly refer to various types of hills and natural stone piles, but today is used exclusively of artificial ones.
The word cairn derives from Scots cairn (with the same meaning), in turn from Scottish Gaelic càrn, which is essentially the same as the corresponding words in other native Celtic languages of Britain, Ireland and Brittany, including Welsh carn (and carnedd), Breton karn, Irish carn, and Cornish karn or carn. Cornwall (Kernow) itself may actually be named after the cairns that dot its landscape, such as Cornwall's highest point, Brown Willy Summit Cairn, a 5 m (16 ft) high and 24 m (79 ft) diameter mound atop Brown Willy hill in Bodmin Moor, an area with many ancient cairns. Burial cairns and other megaliths are the subject of a variety of legends and folklore throughout Britain and Ireland. In Scotland, it is traditional to carry a stone up from the bottom of a hill to place on a cairn at its top. In such a fashion, cairns would grow ever larger. An old Scottish Gaelic blessing is Cuiridh mi clach air do chàrn, "I'll put a stone on your cairn". In Highland folklore it is recounted that before Highland clans fought in a battle, each man would place a stone in a pile. Those who survived the battle returned and removed a stone from the pile. The stones that remained were built into a cairn to honour the dead. Cairns in the region were also put to vital practical use. For example, Dún Aonghasa, an all-stone Iron Age Irish hill fort on Inishmore in the Aran Islands, is still surrounded by small cairns and strategically placed jutting rocks, used collectively as an alternative to defensive earthworks because of the karst landscape's lack of soil. In February 2020, ancient cairns dating back 4,500 years, used to bury the leaders or chieftains of Neolithic tribespeople, were revealed at Cwmcelyn in Blaenau Gwent by the Aberystruth Archaeological Society.
In Scandinavia, cairns have been used for centuries as trail and sea marks, among other purposes, the most notable being the Three-Country Cairn. In Iceland, cairns were often used as markers along the numerous single-file roads or paths that crisscrossed the island; many of these ancient cairns are still standing, although the paths have disappeared. In Norse Greenland, cairns were used as a hunting implement, a game-driving "lane", used to direct reindeer towards a game jump.
In the mythology of ancient Greece, cairns were associated with Hermes, the god of overland travel. According to one legend, Hermes was put on trial by Hera for slaying her favorite servant, the monster Argus. All of the other gods acted as a jury, and as a way of declaring their verdict they were given pebbles, and told to throw them at whichever person they deemed to be in the right, Hermes or Hera. Hermes argued so skillfully that he ended up buried under a heap of pebbles, and this was the first cairn. In Croatia, in areas of ancient Dalmatia, such as Herzegovina and the Krajina, they are known as gromila.
In Portugal, a cairn is called a moledro. In a legend the moledros are enchanted soldiers, and if one stone is taken from the pile and put under a pillow, in the morning a soldier will appear for a brief moment, then will change back to a stone and magically return to the pile. The cairns that mark the place where someone died or cover the graves alongside the roads where in the past people were buried are called Fiéis de Deus. The same name given to the stones was given to the dead whose identity was unknown.
Cairns (taalo) are a common feature at El Ayo, Haylan, Qa'ableh, Qombo'ul, Heis, Salweyn and Gelweita, among other places. Somaliland in general is home to a lot of such historical settlements and archaeological sites wherein are found numerous ancient ruins and buildings, many of obscure origins. However, many of these old structures have yet to be properly explored, a process which would help shed further light on local history and facilitate their preservation for posterity.
Since Neolithic times, the climate of North Africa has become drier. A reminder of the desertification of the area is provided by megalithic remains, which occur in a great variety of forms and in vast numbers in presently arid and uninhabitable wastelands: cairns (kerkour), dolmens and circles like Stonehenge, underground cells excavated in rock, barrows topped with huge slabs, and step pyramid-like mounds.
The Biblical place name Gilead (Genesis 31 etc.) means literally "heap of testimony/evidence" as does its Aramaic translation (ibid.) Yegar Sahaduta. In modern Hebrew, gal-'ed (גל-עד) is the actual word for "cairn". In Genesis 31 the cairn of Gilead was set up as a border demarcation between Jacob and his father-in-law Laban at their last meeting.
Starting in the Bronze Age, burial cists were sometimes interred into cairns, which would be situated in conspicuous positions, often on the skyline above the village of the deceased. Though most often found in the British Isles, evidence of Bronze Age cists have been found in Mongolia. The stones may have been thought to deter grave robbers and scavengers. Another explanation is that they were to stop the dead from rising. There remains a Jewish tradition of placing small stones on a person's grave as a token of respect, known as visitation stones, though this is generally to relate the longevity of stone to the eternal nature of the soul and is not usually done in a cairn fashion. Stupas in India and Tibet probably started out in a similar fashion, although they now generally contain the ashes of a Buddhist saint or lama.
A traditional and often decorated, heap-formed cairn called an ovoo is made in Mongolia. It primarily serves religious purposes, and finds use in both Tengriist and Buddhist ceremonies. Ovoos were also often used as landmarks and meeting points in traditional nomadic Mongolian culture. Traditional ceremonies still take place at ovoos today, and in a survey conducted, 75 out of 144 participants stated that they believe in ovoo ceremonies. However, mining and other industrial operations today threaten the ovoos.
In Hawaii, cairns, called by the Hawaiian word ahu, are still being built today. Though in other cultures, the cairns were typically used as trail markers and sometimes funerary sites, the ancient Hawaiians also used them as altars or security towers. The Hawaiian people are still building these cairns today, using them as the focal points for ceremonies honoring their ancestors and spirituality.
In South Korea, cairns are quite prevalent, often found along roadsides and trails, up on mountain peaks, and adjacent to Buddhist temples. Hikers frequently add stones to existing cairns trying to get just one more on top of the pile, to bring good luck. This tradition has its roots in the worship of San-shin, or Mountain Spirit, so often still revered in Korean culture.
Throughout what today are the continental United States and Canada, some Indigenous peoples of the Americas have built structures similar to cairns. In some cases, these are general trail markers, and in other cases they mark game-driving "lanes", such as those leading to buffalo jumps.
Peoples from some of the Indigenous cultures of arctic North America (i.e. northern Canada, Alaska and Greenland) have built carefully constructed stone sculptures called inuksuit and inunnguat, which serve as landmarks and directional markers. The oldest of these structures pre-date contact with Europeans. They are iconic of the region (an inuksuk even features on the flag of the Canadian far-northeastern territory, Nunavut).
Cairns have been used throughout what is now Latin America, since pre-Columbian times, to mark trails. Even today, in the Andes of South America, the Quechuan peoples build cairns as part of their spiritual and religious traditions.
Cairns can be used to mark hiking trails, especially in mountain regions at or above the tree line. Examples can be seen in the lava fields of Volcanoes National Park to mark several hikes. Placed at regular intervals, a series of cairns can be used to indicate a path across stony or barren terrain, even across glaciers. In Acadia National Park, in Maine, the trails are marked by a special type of cairn instituted in the 1890s by Waldron Bates and dubbed Bates cairns.
Coastal cairns called sea marks are also common in the northern latitudes, especially in the island-strewn waters of Scandinavia and eastern Canada. They are placed along shores and on islands and islets. Usually painted white for improved offshore visibility, they serve as navigation aids. In Sweden, they are called kummel, in Finland kummeli, in Norway varde, and are indicated in navigation charts and maintained as part of the nautical marking system.
This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Cairn". Encyclopædia Britannica (11th ed.). Cambridge University Press. | [
{
"paragraph_id": 0,
"text": "A cairn is a human-made pile (or stack) of stones raised for a purpose, usually as a marker or as a burial mound. The word cairn comes from the Scottish Gaelic: càrn [ˈkʰaːrˠn̪ˠ] (plural càirn [ˈkʰaːrˠɲ]).",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cairns have been and are used for a broad variety of purposes. In prehistory, they were raised as markers, as memorials and as burial monuments (some of which contained chambers). In the modern era, cairns are often raised as landmarks, especially to mark the summits of mountains. Cairns are also used as trail markers. They vary in size from small stone markers to entire artificial hills, and in complexity from loose conical rock piles to elaborate megalithic structures. Cairns may be painted or otherwise decorated, whether for increased visibility or for religious reasons.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A variant is the inuksuk (plural inuksuit), used by the Inuit and other peoples of the Arctic region of North America.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The building of cairns for various purposes goes back into prehistory in Eurasia, ranging in size from small rock sculptures to substantial human-made hills of stone (some built on top of larger, natural hills). The latter are often relatively massive Bronze Age or earlier structures which, like kistvaens and dolmens, frequently contain burials; they are comparable to tumuli (kurgans), but of stone construction instead of earthworks. Cairn originally could more broadly refer to various types of hills and natural stone piles, but today is used exclusively of artificial ones.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The word cairn derives from Scots cairn (with the same meaning), in turn from Scottish Gaelic càrn, which is essentially the same as the corresponding words in other native Celtic languages of Britain, Ireland and Brittany, including Welsh carn (and carnedd), Breton karn, Irish carn, and Cornish karn or carn. Cornwall (Kernow) itself may actually be named after the cairns that dot its landscape, such as Cornwall's highest point, Brown Willy Summit Cairn, a 5 m (16 ft) high and 24 m (79 ft) diameter mound atop Brown Willy hill in Bodmin Moor, an area with many ancient cairns. Burial cairns and other megaliths are the subject of a variety of legends and folklore throughout Britain and Ireland. In Scotland, it is traditional to carry a stone up from the bottom of a hill to place on a cairn at its top. In such a fashion, cairns would grow ever larger. An old Scottish Gaelic blessing is Cuiridh mi clach air do chàrn, \"I'll put a stone on your cairn\". In Highland folklore it is recounted that before Highland clans fought in a battle, each man would place a stone in a pile. Those who survived the battle returned and removed a stone from the pile. The stones that remained were built into a cairn to honour the dead. Cairns in the region were also put to vital practical use. For example, Dún Aonghasa, an all-stone Iron Age Irish hill fort on Inishmore in the Aran Islands, is still surrounded by small cairns and strategically placed jutting rocks, used collectively as an alternative to defensive earthworks because of the karst landscape's lack of soil. In February 2020, ancient cairns dated back to 4,500 year-old used to bury the leaders or chieftains of neolithic tribes people were revealed in the Cwmcelyn in Blaenau Gwent by the Aberystruth Archaeological Society.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In Scandinavia, cairns have been used for centuries as trail and sea marks, among other purposes, the most notable being the Three-Country Cairn. In Iceland, cairns were often used as markers along the numerous single-file roads or paths that crisscrossed the island; many of these ancient cairns are still standing, although the paths have disappeared. In Norse Greenland, cairns were used as a hunting implement, a game-driving \"lane\", used to direct reindeer towards a game jump.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the mythology of ancient Greece, cairns were associated with Hermes, the god of overland travel. According to one legend, Hermes was put on trial by Hera for slaying her favorite servant, the monster Argus. All of the other gods acted as a jury, and as a way of declaring their verdict they were given pebbles, and told to throw them at whichever person they deemed to be in the right, Hermes or Hera. Hermes argued so skillfully that he ended up buried under a heap of pebbles, and this was the first cairn. In Croatia, in areas of ancient Dalmatia, such as Herzegovina and the Krajina, they are known as gromila.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In Portugal, a cairn is called a moledro. In a legend the moledros are enchanted soldiers, and if one stone is taken from the pile and put under a pillow, in the morning a soldier will appear for a brief moment, then will change back to a stone and magically return to the pile. The cairns that mark the place where someone died or cover the graves alongside the roads where in the past people were buried are called Fiéis de Deus. The same name given to the stones was given to the dead whose identity was unknown.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Cairns (taalo) are a common feature at El Ayo, Haylan, Qa'ableh, Qombo'ul, Heis, Salweyn and Gelweita, among other places. Somaliland in general is home to a lot of such historical settlements and archaeological sites wherein are found numerous ancient ruins and buildings, many of obscure origins. However, many of these old structures have yet to be properly explored, a process which would help shed further light on local history and facilitate their preservation for posterity.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Since Neolithic times, the climate of North Africa has become drier. A reminder of the desertification of the area is provided by megalithic remains, which occur in a great variety of forms and in vast numbers in presently arid and uninhabitable wastelands: cairns (kerkour), dolmens and circles like Stonehenge, underground cells excavated in rock, barrows topped with huge slabs, and step pyramid-like mounds.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The Biblical place name Gilead (Genesis 31 etc.) means literally \"heap of testimony/evidence\" as does its Aramaic translation (ibid.) Yegar Sahaduta. In modern Hebrew, gal-'ed (גל-עד) is the actual word for \"cairn\". In Genesis 31 the cairn of Gilead was set up as a border demarcation between Jacob and his father-in-law Laban at their last meeting.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Starting in the Bronze Age, burial cists were sometimes interred into cairns, which would be situated in conspicuous positions, often on the skyline above the village of the deceased. Though most often found in the British Isles, evidence of Bronze Age cists have been found in Mongolia. The stones may have been thought to deter grave robbers and scavengers. Another explanation is that they were to stop the dead from rising. There remains a Jewish tradition of placing small stones on a person's grave as a token of respect, known as visitation stones, though this is generally to relate the longevity of stone to the eternal nature of the soul and is not usually done in a cairn fashion. Stupas in India and Tibet probably started out in a similar fashion, although they now generally contain the ashes of a Buddhist saint or lama.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "A traditional and often decorated, heap-formed cairn called an ovoo is made in Mongolia. It primarily serves religious purposes, and finds use in both Tengriist and Buddhist ceremonies. Ovoos were also often used as landmarks and meeting points in traditional nomadic Mongolian culture. Traditional ceremonies still take place at ovoos today, and in a survey conducted, 75 participants out of 144 participants stated that they believe in ovoo ceremonies. However, mining and other industrial operations today threaten the ovoos",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In Hawaii, cairns, called by the Hawaiian word ahu, are still being built today. Though in other cultures, the cairns were typically used as trail markers and sometimes funerary sites, the ancient Hawaiians also used them as altars or security tower. The Hawaiian people are still building these cairns today, using them as the focal points for ceremonies honoring their ancestors and spirituality.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In South Korea, cairns are quite prevalent, often found along roadsides and trails, up on mountain peaks, and adjacent to Buddhist temples. Hikers frequently add stones to existing cairns trying to get just one more on top of the pile, to bring good luck. This tradition has its roots in the worship of San-shin, or Mountain Spirit, so often still revered in Korean culture.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Throughout what today are the continental United States and Canada, some Indigenous peoples of the Americas have built structures similar to cairns. In some cases, these are general trail markers, and in other cases they mark game-driving \"lanes\", such as those leading to buffalo jumps.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Peoples from some of the Indigenous cultures of arctic North America (i.e. northern Canada, Alaska and Greenland) have built carefully constructed stone sculptures called inuksuit and inunnguat, which serve as landmarks and directional markers. The oldest of these structures are very old and pre-date contact with Europeans. They are iconic of the region (an inuksuk even features on the flag of the Canadian far-northeastern territory, Nunavut).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Cairns have been used throughout what is now Latin America, since pre-Columbian times, to mark trails. Even today, in the Andes of South America, the Quechuan peoples build cairns as part of their spiritual and religious traditions.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Cairn can be used to mark hiking trails, especially in mountain regions at or above the tree line. Examples can be seen in the lava fields of Volcanoes National Park to mark several hikes. Placed at regular intervals, a series of cairns can be used to indicate a path across stony or barren terrain, even across glaciers. In Acadia National Park, in Maine, the trails are marked by a special type of cairn instituted in the 1890s by Waldron Bates and dubbed Bates cairns.",
"title": "Modern cairns"
},
{
"paragraph_id": 19,
"text": "Coastal cairns called sea marks are also common in the northern latitudes, especially in the island-strewn waters of Scandinavia and eastern Canada. They are placed along shores and on islands and islets. Usually painted white for improved offshore visibility, they serve as navigation aids. In Sweden, they are called kummel, in Finland kummeli, in Norway varde, and are indicated in navigation charts and maintained as part of the nautical marking system.",
"title": "Modern cairns"
},
{
"paragraph_id": 20,
"text": "This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). \"Cairn\". Encyclopædia Britannica (11th ed.). Cambridge University Press.",
"title": "References"
}
] | A cairn is a human-made pile of stones raised for a purpose, usually as a marker or as a burial mound. The word cairn comes from the Scottish Gaelic: càrn. Cairns have been and are used for a broad variety of purposes. In prehistory, they were raised as markers, as memorials and as burial monuments. In the modern era, cairns are often raised as landmarks, especially to mark the summits of mountains. Cairns are also used as trail markers. They vary in size from small stone markers to entire artificial hills, and in complexity from loose conical rock piles to elaborate megalithic structures. Cairns may be painted or otherwise decorated, whether for increased visibility or for religious reasons. A variant is the inuksuk, used by the Inuit and other peoples of the Arctic region of North America. | 2001-11-20T17:59:32Z | 2023-12-30T16:14:05Z | [
"Template:Lang",
"Template:Citation needed",
"Template:Clear",
"Template:Reflist",
"Template:Cite book",
"Template:Commons category",
"Template:IPA-gd",
"Template:About",
"Template:Annotated link",
"Template:Short description",
"Template:Div col",
"Template:Div col end",
"Template:Cite web",
"Template:Prehistoric technology",
"Template:More citations needed",
"Template:Lang-gd",
"Template:Clarify",
"Template:EB1911",
"Template:ISBN",
"Template:Cite journal",
"Template:Wikisource1911Enc",
"Template:Wiktionary",
"Template:Redirect"
] | https://en.wikipedia.org/wiki/Cairn |
7,198 | Characteristic subgroup | In mathematics, particularly in the area of abstract algebra known as group theory, a characteristic subgroup is a subgroup that is mapped to itself by every automorphism of the parent group. Because every conjugation map is an inner automorphism, every characteristic subgroup is normal; though the converse is not guaranteed. Examples of characteristic subgroups include the commutator subgroup and the center of a group.
A subgroup H of a group G is called a characteristic subgroup if for every automorphism φ of G, one has φ(H) ≤ H; then write H char G.
It would be equivalent to require the stronger condition φ(H) = H for every automorphism φ of G, because applying the condition to the automorphism φ⁻¹ gives φ⁻¹(H) ≤ H, which implies the reverse inclusion H ≤ φ(H).
Given H char G, every automorphism of G induces an automorphism of the quotient group G/H, which yields a homomorphism Aut(G) → Aut(G/H).
If G has a unique subgroup H of a given index, then H is characteristic in G.
A subgroup of G that is invariant under all inner automorphisms is called normal; also, an invariant subgroup.
Since Inn(G) ⊆ Aut(G) and a characteristic subgroup is invariant under all automorphisms, every characteristic subgroup is normal. However, not every normal subgroup is characteristic. Here are several examples:
A strictly characteristic subgroup, or a distinguished subgroup, is one which is invariant under surjective endomorphisms. For finite groups, surjectivity of an endomorphism implies injectivity, so a surjective endomorphism is an automorphism; thus being strictly characteristic is equivalent to being characteristic. This is no longer the case for infinite groups.
For an even stronger constraint, a fully characteristic subgroup (also, fully invariant subgroup; cf. invariant subgroup), H, of a group G, is a subgroup remaining invariant under every endomorphism of G; that is,
Every group has itself (the improper subgroup) and the trivial subgroup as two of its fully characteristic subgroups. The commutator subgroup of a group is always a fully characteristic subgroup.
Every endomorphism of G induces an endomorphism of G/H, which yields a map End(G) → End(G/H).
An even stronger constraint is verbal subgroup, which is the image of a fully invariant subgroup of a free group under a homomorphism. More generally, any verbal subgroup is always fully characteristic. For any reduced free group, and, in particular, for any free group, the converse also holds: every fully characteristic subgroup is verbal.
The property of being characteristic or fully characteristic is transitive; if H is a (fully) characteristic subgroup of K, and K is a (fully) characteristic subgroup of G, then H is a (fully) characteristic subgroup of G.
Moreover, while normality is not transitive, it is true that every characteristic subgroup of a normal subgroup is normal.
Similarly, while being strictly characteristic (distinguished) is not transitive, it is true that every fully characteristic subgroup of a strictly characteristic subgroup is strictly characteristic.
However, unlike normality, if H char G and K is a subgroup of G containing H, then in general H is not necessarily characteristic in K.
Every subgroup that is fully characteristic is certainly strictly characteristic and characteristic; but a characteristic or even strictly characteristic subgroup need not be fully characteristic.
The center of a group is always a strictly characteristic subgroup, but it is not always fully characteristic. For example, the finite group of order 12, Sym(3) × Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } , has a homomorphism taking (π, y) to ((1, 2)^y, 0), which takes the center, 1 × Z / 2 Z {\displaystyle 1\times \mathbb {Z} /2\mathbb {Z} } , into a subgroup of Sym(3) × 1, which meets the center only in the identity.
The relationship amongst these subgroup properties can be expressed as:
Consider the group G = S3 × Z 2 {\displaystyle \mathbb {Z} _{2}} (the group of order 12 that is the direct product of the symmetric group of order 6 and a cyclic group of order 2). The center of G is isomorphic to its second factor Z 2 {\displaystyle \mathbb {Z} _{2}} . Note that the first factor, S3, contains subgroups isomorphic to Z 2 {\displaystyle \mathbb {Z} _{2}} , for instance {e, (12)} ; let f : Z 2 → S 3 {\displaystyle f:\mathbb {Z} _{2}\rightarrow {\text{S}}_{3}} be the morphism mapping Z 2 {\displaystyle \mathbb {Z} _{2}} onto the indicated subgroup. Then the composition of the projection of G onto its second factor Z 2 {\displaystyle \mathbb {Z} _{2}} , followed by f, followed by the inclusion of S3 into G as its first factor, provides an endomorphism of G under which the image of the center, Z 2 {\displaystyle \mathbb {Z} _{2}} , is not contained in the center, so here the center is not a fully characteristic subgroup of G.
Every subgroup of a cyclic group is characteristic.
The derived subgroup (or commutator subgroup) of a group is a verbal subgroup. The torsion subgroup of an abelian group is a fully invariant subgroup.
The identity component of a topological group is always a characteristic subgroup. | [
{
"paragraph_id": 0,
"text": "In mathematics, particularly in the area of abstract algebra known as group theory, a characteristic subgroup is a subgroup that is mapped to itself by every automorphism of the parent group. Because every conjugation map is an inner automorphism, every characteristic subgroup is normal; though the converse is not guaranteed. Examples of characteristic subgroups include the commutator subgroup and the center of a group.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A subgroup H of a group G is called a characteristic subgroup if for every automorphism φ of G, one has φ(H) ≤ H; then write H char G.",
"title": "Definition"
},
{
"paragraph_id": 2,
"text": "It would be equivalent to require the stronger condition φ(H) = H for every automorphism φ of G, because φ(H) ≤ H implies the reverse inclusion H ≤ φ(H).",
"title": "Definition"
},
{
"paragraph_id": 3,
"text": "Given H char G, every automorphism of G induces an automorphism of the quotient group G/H, which yields a homomorphism Aut(G) → Aut(G/H).",
"title": "Basic properties"
},
{
"paragraph_id": 4,
"text": "If G has a unique subgroup H of a given index, then H is characteristic in G.",
"title": "Basic properties"
},
{
"paragraph_id": 5,
"text": "A subgroup of H that is invariant under all inner automorphisms is called normal; also, an invariant subgroup.",
"title": "Related concepts"
},
{
"paragraph_id": 6,
"text": "Since Inn(G) ⊆ Aut(G) and a characteristic subgroup is invariant under all automorphisms, every characteristic subgroup is normal. However, not every normal subgroup is characteristic. Here are several examples:",
"title": "Related concepts"
},
{
"paragraph_id": 7,
"text": "A strictly characteristic subgroup, or a distinguished subgroup, which is invariant under surjective endomorphisms. For finite groups, surjectivity of an endomorphism implies injectivity, so a surjective endomorphism is an automorphism; thus being strictly characteristic is equivalent to characteristic. This is not the case anymore for infinite groups.",
"title": "Related concepts"
},
{
"paragraph_id": 8,
"text": "For an even stronger constraint, a fully characteristic subgroup (also, fully invariant subgroup; cf. invariant subgroup), H, of a group G, is a group remaining invariant under every endomorphism of G; that is,",
"title": "Related concepts"
},
{
"paragraph_id": 9,
"text": "Every group has itself (the improper subgroup) and the trivial subgroup as two of its fully characteristic subgroups. The commutator subgroup of a group is always a fully characteristic subgroup.",
"title": "Related concepts"
},
{
"paragraph_id": 10,
"text": "Every endomorphism of G induces an endomorphism of G/H, which yields a map End(G) → End(G/H).",
"title": "Related concepts"
},
{
"paragraph_id": 11,
"text": "An even stronger constraint is verbal subgroup, which is the image of a fully invariant subgroup of a free group under a homomorphism. More generally, any verbal subgroup is always fully characteristic. For any reduced free group, and, in particular, for any free group, the converse also holds: every fully characteristic subgroup is verbal.",
"title": "Related concepts"
},
{
"paragraph_id": 12,
"text": "The property of being characteristic or fully characteristic is transitive; if H is a (fully) characteristic subgroup of K, and K is a (fully) characteristic subgroup of G, then H is a (fully) characteristic subgroup of G.",
"title": "Transitivity"
},
{
"paragraph_id": 13,
"text": "Moreover, while normality is not transitive, it is true that every characteristic subgroup of a normal subgroup is normal.",
"title": "Transitivity"
},
{
"paragraph_id": 14,
"text": "Similarly, while being strictly characteristic (distinguished) is not transitive, it is true that every fully characteristic subgroup of a strictly characteristic subgroup is strictly characteristic.",
"title": "Transitivity"
},
{
"paragraph_id": 15,
"text": "However, unlike normality, if H char G and K is a subgroup of G containing H, then in general H is not necessarily characteristic in K.",
"title": "Transitivity"
},
{
"paragraph_id": 16,
"text": "Every subgroup that is fully characteristic is certainly strictly characteristic and characteristic; but a characteristic or even strictly characteristic subgroup need not be fully characteristic.",
"title": "Containments"
},
{
"paragraph_id": 17,
"text": "The center of a group is always a strictly characteristic subgroup, but it is not always fully characteristic. For example, the finite group of order 12, Sym(3) × Z / 2 Z {\\displaystyle \\mathbb {Z} /2\\mathbb {Z} } , has a homomorphism taking (π, y) to ((1, 2), 0), which takes the center, 1 × Z / 2 Z {\\displaystyle 1\\times \\mathbb {Z} /2\\mathbb {Z} } , into a subgroup of Sym(3) × 1, which meets the center only in the identity.",
"title": "Containments"
},
{
"paragraph_id": 18,
"text": "The relationship amongst these subgroup properties can be expressed as:",
"title": "Containments"
},
{
"paragraph_id": 19,
"text": "Consider the group G = S3 × Z 2 {\\displaystyle \\mathbb {Z} _{2}} (the group of order 12 that is the direct product of the symmetric group of order 6 and a cyclic group of order 2). The center of G is isomorphic to its second factor Z 2 {\\displaystyle \\mathbb {Z} _{2}} . Note that the first factor, S3, contains subgroups isomorphic to Z 2 {\\displaystyle \\mathbb {Z} _{2}} , for instance {e, (12)} ; let f : Z 2 <→ S 3 {\\displaystyle f:\\mathbb {Z} _{2}<\\rightarrow {\\text{S}}_{3}} be the morphism mapping Z 2 {\\displaystyle \\mathbb {Z} _{2}} onto the indicated subgroup. Then the composition of the projection of G onto its second factor Z 2 {\\displaystyle \\mathbb {Z} _{2}} , followed by f, followed by the inclusion of S3 into G as its first factor, provides an endomorphism of G under which the image of the center, Z 2 {\\displaystyle \\mathbb {Z} _{2}} , is not contained in the center, so here the center is not a fully characteristic subgroup of G.",
"title": "Examples"
},
{
"paragraph_id": 20,
"text": "Every subgroup of a cyclic group is characteristic.",
"title": "Examples"
},
{
"paragraph_id": 21,
"text": "The derived subgroup (or commutator subgroup) of a group is a verbal subgroup. The torsion subgroup of an abelian group is a fully invariant subgroup.",
"title": "Examples"
},
{
"paragraph_id": 22,
"text": "The identity component of a topological group is always a characteristic subgroup.",
"title": "Examples"
}
] | In mathematics, particularly in the area of abstract algebra known as group theory, a characteristic subgroup is a subgroup that is mapped to itself by every automorphism of the parent group. Because every conjugation map is an inner automorphism, every characteristic subgroup is normal; though the converse is not guaranteed. Examples of characteristic subgroups include the commutator subgroup and the center of a group. | 2022-06-29T16:31:44Z | [
"Template:Short description",
"Template:Math",
"Template:Main",
"Template:Anchor",
"Template:Vanchor",
"Template:Reflist",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Characteristic_subgroup |
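The claim above that the center of Sym(3) × Z/2Z is not fully characteristic can be verified by brute force over the twelve group elements. The following is a minimal sketch in Python under stated assumptions: permutations act on {0, 1, 2} rather than {1, 2, 3}, the transposition t plays the role of (1 2), and the helper names (pmul, gmul, f) are illustrative choices, not taken from the article or from any library.

from itertools import permutations

def pmul(p, q):
    # compose permutations stored as tuples: (p * q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))     # the symmetric group on {0, 1, 2}
Z2 = [0, 1]
G = [(p, y) for p in S3 for y in Z2]  # the direct product S3 x Z2

identity = (0, 1, 2)
t = (1, 0, 2)                         # the transposition swapping 0 and 1

def gmul(g, h):
    # multiplication in the direct product
    return (pmul(g[0], h[0]), (g[1] + h[1]) % 2)

def f(g):
    # the endomorphism (pi, y) -> (t^y, 0): project onto Z2, then map 1 to t inside S3
    p, y = g
    return (t if y == 1 else identity, 0)

# f is an endomorphism of G
assert all(f(gmul(g, h)) == gmul(f(g), f(h)) for g in G for h in G)

# the center of G is 1 x Z2
center = [g for g in G if all(gmul(g, h) == gmul(h, g) for h in G)]
assert center == [(identity, 0), (identity, 1)]

# ...but f maps the central element (identity, 1) to (t, 0), which is not central,
# so the center is not invariant under every endomorphism of G
assert f((identity, 1)) == (t, 0) and (t, 0) not in center

Exhaustive checking is feasible here only because the group has twelve elements; the point is simply to make the counterexample concrete.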
|
7,199 | List of cat breeds | The following list of cat breeds includes only domestic cat breeds and domestic and wild hybrids. The list includes established breeds recognized by various cat registries, new and experimental breeds, landraces being established as standardized breeds, distinct domestic populations not being actively developed and lapsed (extinct) breeds.
As of 2023, The International Cat Association (TICA) recognizes 73 standardized breeds, the Cat Fanciers' Association (CFA) recognizes 45, the Fédération Internationale Féline (FIFe) recognizes 50, the Governing Council of the Cat Fancy (GCCF) recognizes 45, and the World Cat Federation (WCF) recognizes 69.
Inconsistency in a breed classification and naming among registries means that an individual animal may be considered different breeds by different registries (though not necessarily eligible for registry in them all, depending on its exact ancestry). For example, TICA's Himalayan is considered a colorpoint variety of the Persian by the CFA, while the Javanese (or Colorpoint Longhair) is a color variation of the Balinese in both the TICA and the CFA; both breeds are merged (along with the Colorpoint Shorthair) into a single "mega-breed", the Colourpoint, by the World Cat Federation (WCF), who have repurposed the name "Javanese" for the Oriental Longhair. Also, "Colo[u]rpoint Longhair" refers to different breeds in other registries. There are many examples of nomenclatural overlap and differences of this sort. Furthermore, many geographical and cultural names for cat breeds are fanciful selections made by Western breeders to be exotic sounding and bear no relationship to the actual origin of the breeds; the Balinese, Javanese, and Himalayan are all examples of this trend.
The domestic short-haired and domestic long-haired cat types are not breeds, but terms used (with various spellings) in the cat fancy to describe "mongrel" or "bicolor" cats by coat length, ones that do not belong to a particular breed. Some registries permit them to be pedigreed and they have been used as foundation stock in the establishment of some breeds. They should not be confused with standardized breeds with similar names, such as the British Shorthair and Oriental Longhair. | [
{
"paragraph_id": 0,
"text": "The following list of cat breeds includes only domestic cat breeds and domestic and wild hybrids. The list includes established breeds recognized by various cat registries, new and experimental breeds, landraces being established as standardized breeds, distinct domestic populations not being actively developed and lapsed (extinct) breeds.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As of 2023, The International Cat Association (TICA) recognizes 73 standardized breeds, the Cat Fanciers' Association (CFA) recognizes 45, the Fédération Internationale Féline (FIFe) recognizes 50, the Governing Council of the Cat Fancy (GCCF) recognizes 45, and the World Cat Federation (WCF) recognizes 69.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Inconsistency in a breed classification and naming among registries means that an individual animal may be considered different breeds by different registries (though not necessarily eligible for registry in them all, depending on its exact ancestry). For example, TICA's Himalayan is considered a colorpoint variety of the Persian by the CFA, while the Javanese (or Colorpoint Longhair) is a color variation of the Balinese in both the TICA and the CFA; both breeds are merged (along with the Colorpoint Shorthair) into a single \"mega-breed\", the Colourpoint, by the World Cat Federation (WCF), who have repurposed the name \"Javanese\" for the Oriental Longhair. Also, \"Colo[u]rpoint Longhair\" refers to different breeds in other registries. There are many examples of nomenclatural overlap and differences of this sort. Furthermore, many geographical and cultural names for cat breeds are fanciful selections made by Western breeders to be exotic sounding and bear no relationship to the actual origin of the breeds; the Balinese, Javanese, and Himalayan are all examples of this trend.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The domestic short-haired and domestic long-haired cat types are not breeds, but terms used (with various spellings) in the cat fancy to describe \"mongrel\" or \"bicolor\" cats by coat length, ones that do not belong to a particular breed. Some registries permit them to be pedigreed and they have been used as foundation stock in the establishment of some breeds. They should not be confused with standardized breeds with similar names, such as the British Shorthair and Oriental Longhair.",
"title": ""
}
] | The following list of cat breeds includes only domestic cat breeds and domestic and wild hybrids. The list includes established breeds recognized by various cat registries, new and experimental breeds, landraces being established as standardized breeds, distinct domestic populations not being actively developed and lapsed (extinct) breeds. As of 2023, The International Cat Association (TICA) recognizes 73 standardized breeds, the Cat Fanciers' Association (CFA) recognizes 45, the Fédération Internationale Féline (FIFe) recognizes 50, the Governing Council of the Cat Fancy (GCCF) recognizes 45, and the World Cat Federation (WCF) recognizes 69. Inconsistency in a breed classification and naming among registries means that an individual animal may be considered different breeds by different registries. For example, TICA's Himalayan is considered a colorpoint variety of the Persian by the CFA, while the Javanese is a color variation of the Balinese in both the TICA and the CFA; both breeds are merged into a single "mega-breed", the Colourpoint, by the World Cat Federation (WCF), who have repurposed the name "Javanese" for the Oriental Longhair. Also, "Colo[u]rpoint Longhair" refers to different breeds in other registries. There are many examples of nomenclatural overlap and differences of this sort. Furthermore, many geographical and cultural names for cat breeds are fanciful selections made by Western breeders to be exotic sounding and bear no relationship to the actual origin of the breeds; the Balinese, Javanese, and Himalayan are all examples of this trend. The domestic short-haired and domestic long-haired cat types are not breeds, but terms used in the cat fancy to describe "mongrel" or "bicolor" cats by coat length, ones that do not belong to a particular breed. Some registries permit them to be pedigreed and they have been used as foundation stock in the establishment of some breeds. They should not be confused with standardized breeds with similar names, such as the British Shorthair and Oriental Longhair. | 2001-11-21T17:45:33Z | 2023-12-31T04:06:50Z | [
"Template:Crossreference",
"Template:Notelist",
"Template:Reflist",
"Template:Cite book",
"Template:Breed",
"Template:Short description",
"Template:Unclear style",
"Template:Citation needed",
"Template:Efn",
"Template:OR-section",
"Template:Cite web",
"Template:Cat nav",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/List_of_cat_breeds |
7,200 | Class action | A class action, also known as a class-action lawsuit, class suit, or representative action, is a type of lawsuit where one of the parties is a group of people who are represented collectively by a member or members of that group. The class action originated in the United States and is still predominantly an American phenomenon, but Canada, as well as several European countries with civil law, have made changes in recent years to allow consumer organizations to bring claims on behalf of consumers.
In a typical class action, a plaintiff sues a defendant or a number of defendants on behalf of a group, or class, of absent parties. This differs from a traditional lawsuit, where one party sues another party, and all of the parties are present in court. Although standards differ between states and countries, class actions are most common where the allegations usually involve at least 40 people who the same defendant has injured in the same way. Instead of each damaged person bringing one's own lawsuit, the class action allows all the claims of all class members—whether they know they have been damaged or not—to be resolved in a single proceeding through the efforts of the representative plaintiff(s) and appointed class counsel.
The antecedent of the class action was what modern observers call "group litigation," which appears to have been quite common in medieval England from about 1200 onward. These lawsuits involved groups of people either suing or being sued in actions at common law. These groups were usually based on existing societal structures like villages, towns, parishes, and guilds. Unlike modern courts, the medieval English courts did not question the right of the actual plaintiffs to sue on behalf of a group or a few representatives to defend an entire group.
From 1400 to 1700, group litigation gradually switched from being the norm in England to the exception. The development of the concept of the corporation led to the wealthy supporters of the corporate form becoming suspicious of all unincorporated legal entities, which in turn led to the modern concept of the unincorporated or voluntary association. The tumultuous history of the Wars of the Roses and then the Star Chamber resulted in periods during which the common law courts were frequently paralyzed, and out of the confusion the Court of Chancery emerged with exclusive jurisdiction over group litigation.
By 1850, Parliament had enacted several statutes on a case-by-case basis to deal with issues regularly faced by certain types of organizations, like joint-stock companies, and with the impetus for most types of group litigation removed, it went into a steep decline in English jurisprudence from which it never recovered. It was further weakened by the fact that equity pleading, in general, was falling into disfavor, which culminated in the Judicature Acts of 1874 and 1875. Group litigation was essentially dead in the United Kingdom after 1850.
Class actions survived in the United States thanks to the influence of Supreme Court Associate Justice Joseph Story, who imported it into US law through summary discussions in his two equity treatises as well as his opinion in West v. Randall (1820). However, Story did not necessarily endorse class actions, because he "could not conceive of a modern function or a coherent theory for representative litigation."
The oldest predecessor to the class-action rule in the United States was in the Federal Equity Rules, specifically Equity Rule 48, promulgated in 1842.
Where the parties on either side are very numerous, and cannot, without manifest inconvenience and oppressive delays in the suit, be all brought before it, the court in its discretion may dispense with making all of them parties, and may proceed in the suit, having sufficient parties before it to represent all the adverse interests of the plaintiffs and the defendants in the suit properly before it. But in such cases, the decree shall be without prejudice to the rights and claims of all the absent parties.
This allowed for representative suits in situations where there were too many individual parties (which now forms the first requirement for class-action litigation – numerosity). However, this rule did not allow such suits to bind similarly situated absent parties, which rendered the rule ineffective. Within ten years, the Supreme Court interpreted Rule 48 in such a way so that it could apply to absent parties under certain circumstances, but only by ignoring the plain meaning of the rule. In the rules published in 1912, Equity Rule 48 was replaced with Equity Rule 38 as part of a major restructuring of the Equity Rules, and when federal courts merged their legal and equitable procedural systems in 1938, Equity Rule 38 became Rule 23 of the Federal Rules of Civil Procedure.
A major revision of the FRCP in 1966 radically transformed Rule 23, made the opt-out class action the standard option, and gave birth to the modern class action. Entire treatises have been written since to summarize the huge mass of law that sprang up from the 1966 revision of Rule 23. Just as medieval group litigation bound all members of the group regardless of whether they all actually appeared in court, the modern class action binds all members of the class, except for those who choose to opt out (if the rules permit them to do so).
The Advisory Committee that drafted the new Rule 23 in the mid-1960s was influenced by two major developments. First was the suggestion of Harry Kalven Jr. and Maurice Rosenfield in 1941 that class-action litigation by individual shareholders on behalf of all shareholders of a company could effectively supplement direct government regulation of securities markets and other similar markets. The second development was the rise of the civil rights movement, environmentalism and consumerism. The groups behind these movements, as well as many others in the 1960s, 1970s and 1980s, all turned to class actions as a means for achieving their goals. For example, a 1978 environmental law treatise reprinted the entire text of Rule 23 and mentioned "class actions" 14 times in its index.
Businesses targeted by class actions for inflicting massive aggregate harm have sought ways to avoid class actions altogether. In the 1990s, the US Supreme Court issued several decisions that strengthened the "federal policy favoring arbitration". In response, lawyers have added provisions to consumer contracts of adhesion called "collective action waivers", which prohibit those signing the contracts from bringing class-action suits. In 2011, the US Supreme Court ruled in a 5–4 decision in AT&T Mobility v. Concepcion that the Federal Arbitration Act of 1925 preempts state laws that prohibit contracts from disallowing class-action lawsuits, which will make it more difficult for consumers to file class-action lawsuits. The dissent pointed to a saving clause in the federal act which allowed states to determine how a contract or its clauses may be revoked.
In two major 21st-century cases, the Supreme Court ruled 5–4 against certification of class actions due to differences in each individual members' circumstances: first in Wal-Mart v. Dukes (2011) and later in Comcast Corp. v. Behrend (2013).
Companies may insert the phrase "may elect to resolve any claim by individual arbitration" into their consumer and employment contracts to use arbitration and prevent class-action lawsuits.
Rejecting arguments that they violated employees' rights to collective bargaining, and that modestly-valued consumer claims would be more efficiently litigated within the parameters of one lawsuit, the U.S. Supreme Court, in Epic Systems Corp. v. Lewis (2018), allowed the use of so-called "class action waivers". Citing its deference to freedom to contract principles, the Epic Systems opinion opened the door dramatically to the use of these waivers as a condition of employment, consumer purchases and the like. Some commentators in opposition to the ruling see it as a "death knell" to many employment and consumer class actions, and have increasingly pushed for legislation to circumvent it in hopes of reviving otherwise-underrepresented parties' ability to litigate on a group basis. Supporters (mostly pro-business) of the high court's ruling argue its holding is consistent with private contract principles. Many of those supporters had long-since argued that class action procedures were generally inconsistent with due process mandates and unnecessarily promoted litigation of otherwise small claims—thus heralding the ruling's anti-litigation effect.
In 2017, the US Supreme Court issued its opinion in Bristol-Myers Squibb Co. v. Superior Court of California, 137 S. Ct. 1773 (2017), holding that over five hundred plaintiffs from other states cannot bring a consolidated mass action against the pharmaceutical giant in the State of California. This opinion may arguably render nationwide mass action and class action impossible in any single state besides the defendant's home state.
In 2020, the 11th Circuit Court of Appeals found that incentive awards are impermissible. Incentive awards are relatively modest payments made to class representatives as part of a class settlement. The ruling responded to an objector who argued that Rule 23 required the fee petition to be filed before the deadline for class member objections, and that payments to the class representative violate doctrine from two US Supreme Court cases from the 1800s.
As of 2010, there was no publicly maintained list of nonsecurities class-action settlements, although a securities class-action database exists in the Stanford Law School Securities Class Action Clearinghouse and several for-profit companies maintain lists of securities settlements. One study of federal settlements required the researcher to manually search databases of lawsuits for the relevant records; state class actions were not included due to the difficulty in gathering the information. Another source of data is the US Bureau of Justice Statistics' Civil Justice Survey of State Courts, which offers statistics for the year 2005.
Proponents of class actions state that they offer a number of advantages because they aggregate many individualized claims into one representational lawsuit.
First, aggregation can increase the efficiency of the legal process, and lower the costs of litigation. In cases with common questions of law and fact, aggregation of claims into a class action may avoid the necessity of repeating "days of the same witnesses, exhibits and issues from trial to trial". Jenkins v. Raymark Indus. Inc., 782 F.2d 468, 473 (5th Cir. 1986) (granting certification of a class action involving asbestos).
Second, a class action may overcome "the problem that small recoveries do not provide the incentive for any individual to bring a solo action prosecuting his or her rights". Amchem Prods., Inc. v. Windsor, 521 U.S. 591, 617 (1997) (quoting Mace v. Van Ru Credit Corp., 109 F.3d 338, 344 (7th Cir. 1997)). "A class action solves this problem by aggregating the relatively paltry potential recoveries into something worth someone's (usually an attorney's) labor." Amchem Prods., Inc., 521 U.S. at 617 (quoting Mace, 109 F.3d at 344). In other words, a class action ensures that a defendant who engages in widespread harm – but does so minimally against each individual plaintiff – must compensate those individuals for their injuries. For example, thousands of shareholders of a public company may have losses too small to justify separate lawsuits, but a class action can be brought efficiently on behalf of all shareholders. Perhaps even more important than compensation is that class treatment of claims may be the only way to impose the costs of wrongdoing on the wrongdoer, thus deterring future wrongdoing.
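To make the aggregation logic concrete, the sketch below works through a purely hypothetical set of figures; the per-plaintiff harm, class size, and cost of an individual suit are invented for illustration and are not drawn from any cited case.

```python
# Illustrative, hypothetical figures: a "negative-value" claim that only
# becomes worth pursuing once aggregated into a class.
per_plaintiff_harm = 30.00        # each consumer overcharged $30 (assumed)
class_size = 2_000_000            # number of affected consumers (assumed)
individual_suit_cost = 5_000.00   # filing fees, time, counsel for a solo suit (assumed)

# No rational individual sues alone: the cost of suing dwarfs the recovery.
worth_suing_alone = per_plaintiff_harm > individual_suit_cost   # False

# Aggregated, however, the claim is large enough to attract class counsel.
aggregate_recovery = per_plaintiff_harm * class_size            # $60,000,000
print(worth_suing_alone, f"${aggregate_recovery:,.0f}")
```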
Third, class-action cases may be brought to purposely change the behavior of a class of which the defendant is a member. Landeros v. Flood (1976) was a landmark case decided by the California Supreme Court that aimed at changing the behavior of doctors, encouraging them to report suspected child abuse; otherwise, they would face the threat of civil action for damages in tort proximately flowing from the failure to report the suspected injuries. Previously, many physicians had remained reluctant to report cases of apparent child abuse, despite existing law that required it.
Fourth, in "limited fund" cases, a class action ensures that all plaintiffs receive relief and that early-filing plaintiffs do not raid the fund (i.e., the defendant) of all its assets before other plaintiffs may be compensated. See Ortiz v. Fibreboard Corp., 527 U.S. 815 (1999). A class action in such a situation centralizes all claims into one venue where a court can equitably divide the assets amongst all the plaintiffs if they win the case.
Finally, a class action avoids the situation where different court rulings could create "incompatible standards" of conduct for the defendant to follow. See Fed. R. Civ. P. 23(b)(1)(A). For example, a court might certify a case for class treatment where a number of individual bond-holders sue to determine whether they may convert their bonds to common stock. Refusing to litigate the case in one trial could result in different outcomes and inconsistent standards of conduct for the defendant corporation. Thus, courts will generally allow a class action in such a situation. See, e.g., Van Gemert v. Boeing Co., 259 F. Supp. 125 (S.D.N.Y. 1966).
Whether a class action is superior to individual litigation depends on the case and is determined by the judge's ruling on a motion for class certification. The Advisory Committee Note to Rule 23, for example, states that mass torts are ordinarily "not appropriate" for class treatment. Class treatment may not improve the efficiency of a mass tort because the claims frequently involve individualized issues of law and fact that will have to be re-tried on an individual basis. See Castano v. Am. Tobacco Co., 84 F.3d 734 (5th Cir. 1996) (rejecting nationwide class action against tobacco companies). Mass torts also involve high individual damage awards; thus, the absence of class treatment will not impede the ability of individual claimants to seek justice. Other cases, however, may be more conducive to class treatment.
The preamble to the Class Action Fairness Act of 2005, passed by the United States Congress, found:
Class-action lawsuits are an important and valuable part of the legal system when they permit the fair and efficient resolution of legitimate claims of numerous parties by allowing the claims to be aggregated into a single action against a defendant that has allegedly caused harm.
There are several criticisms of class actions. The preamble to the Class Action Fairness Act stated that some abusive class actions harmed class members with legitimate claims and defendants that have acted responsibly, adversely affected interstate commerce, and undermined public respect for the country's judicial system.
Class members often receive little or no benefit from class actions. Examples cited include settlements that award large fees to the attorneys while leaving class members with coupons or other awards of little or no value; unjustified awards made to certain plaintiffs at the expense of other class members; and confusing notices that prevent class members from fully understanding and effectively exercising their rights.
For example, in the United States, class lawsuits sometimes bind all class members with a low settlement. These "coupon settlements" (which usually allow the plaintiffs to receive a small benefit such as a small check or a coupon for future services or products with the defendant company) are a way for a defendant to forestall major liability by precluding many people from separately litigating their claims to recover reasonable compensation for their damages. Existing law requires judicial approval of all class-action settlements, and in most cases class members are given a chance to opt out of a class settlement; however, class members may be unaware of their right to opt out because they did not receive the notice, did not read it, or did not understand it.
The Class Action Fairness Act of 2005 addresses these concerns. An independent expert may scrutinize coupon settlements before judicial approval in order to ensure that the settlement will be of value to the class members (28 U.S.C.A. 1712(d)). Further, if the action provides for settlement in coupons, "the portion of any attorney's fee award to class counsel that is attributable to the award of the coupons shall be based on the value to class members of the coupons that are redeemed". 28 U.S.C.A. 1712(a).
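As an illustration of how the § 1712(a) redemption rule changes the fee calculation, here is a minimal sketch with invented numbers; the coupon count, face value, redemption rate, and percentage fee are all assumptions, and the statute itself prescribes no particular percentage.

```python
# Hypothetical coupon-settlement numbers; under 28 U.S.C.A. § 1712(a), only the
# value of coupons actually redeemed may support the coupon-based fee portion.
coupons_issued = 1_000_000
face_value = 10.00          # $10 coupon per class member (assumed)
redemption_rate = 0.05      # 5% of coupons actually redeemed (assumed)
fee_percentage = 0.25       # hypothetical percentage-of-value fee

redeemed_value = coupons_issued * face_value * redemption_rate   # $500,000
coupon_fee_basis = redeemed_value * fee_percentage               # $125,000
# Contrast with a fee keyed to the nominal face value of all issued coupons:
nominal_fee = coupons_issued * face_value * fee_percentage       # $2,500,000
print(f"${coupon_fee_basis:,.0f} vs ${nominal_fee:,.0f}")
```

The contrast shows why basing fees on redeemed rather than issued coupons can shrink the coupon-based portion of the fee by an order of magnitude or more.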
Class action cases present significant ethical challenges. Defendants can hold reverse auctions and any of several parties can engage in collusive settlement discussions. Subclasses may have interests that diverge greatly from the class but may be treated the same. Proposed settlements could offer some groups (such as former customers) much greater benefits than others. In one paper presented at an ABA conference on class actions in 2007, authors commented that "competing cases can also provide opportunities for collusive settlement discussions and reverse auctions by defendants anxious to resolve their new exposure at the most economic cost".
Although normally plaintiffs are the class, defendant class actions are also possible. For example, in 2005, the Roman Catholic Archdiocese of Portland in Oregon was sued as part of the Catholic priest sex-abuse scandal. All parishioners of the Archdiocese's churches were cited as a defendant class. This was done to include their assets (local churches) in any settlement. Where both the plaintiffs and the defendants have been organized into court-approved classes, the action is called a bilateral class action.
In a class action, the plaintiff seeks court approval to litigate on behalf of a group of similarly situated persons. Not every plaintiff looks for or could obtain such approval. As a procedural alternative, plaintiff's counsel may attempt to sign up every similarly situated person that counsel can find as a client. Plaintiff's counsel can then join the claims of all of these persons in one complaint, a so-called "mass action", hoping to have the same efficiencies and economic leverage as if a class had been certified.
Because mass actions operate outside the detailed procedures laid out for class actions, they can pose special difficulties for plaintiffs, defendants, and the court. For example, settlement of class actions follows a predictable path of negotiation with class counsel and representatives, court scrutiny, and notice, but there may not be a way to uniformly settle all of the many claims brought via a mass action. Some states permit plaintiff's counsel to settle for all the mass-action plaintiffs according to a majority vote, while other states, such as New Jersey, require each plaintiff to approve the settlement of that plaintiff's own individual claims.
In Argentina, class actions were recognized in the leading case "Halabi" (Supreme Court, 2009).
Class actions became part of the Australian legal landscape only when the Federal Parliament amended the Federal Court of Australia Act ("the FCAA") in 1992 to introduce the "representative proceedings", the equivalent of the American "class actions".
Likewise, class actions appeared slowly in the New Zealand legal system. However, a group can bring litigation through the action of a representative under the High Court Rules which provide that one or a multitude of persons may sue on behalf of, or for the benefit of, all persons "with the same interest in the subject matter of a proceeding". The presence and expansion of litigation funders have been playing a significant role in the emergence of class actions in New Zealand. For example, the "Fair Play on Fees" proceedings in relation to penalty fees charged by banks were funded by Litigation Lending Services (LLS), a company specializing in the funding and management of litigation in Australia and New Zealand. It was the biggest class-action suit in New Zealand history.
The Austrian Code of Civil Procedure (Zivilprozessordnung – ZPO) does not provide for a special proceeding for complex class-action litigation. However, Austrian consumer organizations (Verein für Konsumenteninformation (VKI) and the Federal Chamber of Labour / Bundesarbeitskammer) have brought claims on behalf of hundreds or even thousands of consumers. In these cases, the individual consumers assigned their claims to one entity, who has then brought an ordinary (two-party) lawsuit over the assigned claims. The monetary benefits were redistributed among the class. This technique, labeled as "class action Austrian style," allows for a significant reduction of overall costs. The Austrian Supreme Court, in a judgment, confirmed the legal admissibility of these lawsuits under the condition that all claims are essentially based on the same grounds.
The Austrian Parliament unanimously requested the Austrian Federal Minister for Justice to examine the possibility of new legislation providing for a cost-effective and appropriate way to deal with mass claims. Together with the Austrian Ministry for Social Security, Generations and Consumer Protection, the Justice Ministry opened the discussion with a conference held in Vienna in June 2005. With the aid of a group of experts from many fields, the Justice Ministry began drafting the new law in September 2005. With the individual positions varying greatly, a political consensus could not be reached.
Provincial laws in Canada allow class actions. All provinces permit plaintiff classes and some permit defendant classes. Quebec was the first province to enact class proceedings legislation, in 1978. Ontario was next, with the Class Proceedings Act, 1992. As of 2008, 9 of 10 provinces had enacted comprehensive class actions legislation. In Prince Edward Island, where no comprehensive legislation exists, following the decision of the Supreme Court of Canada in Western Canadian Shopping Centres Inc. v. Dutton, [2001] 2 S.C.R. 534, class actions may be advanced under a local rule of court. The Federal Court of Canada permits class actions under Part V.1 of the Federal Courts Rules.
Legislation in Saskatchewan, Manitoba, Ontario, and Nova Scotia either expressly allows, or has been read by judicial opinion to allow, what are informally known as national "opt-out" class actions, whereby residents of other provinces may be included in the class definition and potentially be bound by the court's judgment on common issues unless they opt out in a prescribed manner and time. Court rulings have confirmed that this permits a court in one province to include residents of other provinces in the class action on an "opt-out" basis.
Judicial opinions have indicated that provincial legislative national opt-out powers should not be exercised to interfere with the ability of another province to certify a parallel class action for residents of other provinces. The first court to certify will generally exclude residents of provinces whose courts have certified a parallel class action. However, in the Vioxx litigation, two provincial courts certified overlapping class actions whereby Canadian residents were class members in two class actions in two provinces. Both decisions are under appeal.
Other legislation may provide for representative actions on behalf of a large number of plaintiffs, independent of class action procedures. For instance, under Ontario's Condominium Act, a condominium's governing corporation may launch a lawsuit on behalf of the owners for damage to the condominium's common elements, even though the corporation does not own the common elements.
The largest class action suit in Canada was settled in 2005 after Nora Bernard initiated efforts that led to an estimated 79,000 survivors of Canada's residential school system suing the Canadian government. The settlement amounted to upwards of $5 billion.
Chile approved class actions in 2004. The Chilean model is technically an opt-out issue class action, followed by a compensatory stage which can be collective or individual. This means that the class action is designed to declare the defendant generally liable, with erga omnes effects if and only if the defendant is found liable; the declaratory judgment can then be used to pursue damages in the same procedure or in individual proceedings in different jurisdictions. In the latter case, the liability cannot be relitigated, only the damages. Under the Chilean procedural rules, one particular kind of case works as an opt-out class action for damages: when the defendant can identify and compensate consumers directly, for example because it is their banking institution, the judge can skip the compensatory stage and order redress directly. Since 2005 more than 100 cases have been filed, mostly by the Servicio Nacional del Consumidor (SERNAC), the Chilean consumer protection agency. Salient cases have been Condecus v. BancoEstado and SERNAC v. La Polar.
Under French law, an association can represent the collective interests of consumers; however, each claimant must be individually named in the lawsuit. On January 4, 2005, President Chirac urged changes that would provide greater consumer protection. A draft bill was proposed in April 2006 but did not pass.
Following the change of majority in France in 2012, the new government proposed introducing class actions into French law. The "loi Hamon" bill of May 2013 aimed to limit class actions to consumer and competition disputes. The law was passed on March 1, 2014.
Class actions are generally not permitted in Germany, as German law does not recognize the concept of a targeted class being affected by certain actions. Each plaintiff must individually prove that they were affected by an action, present their individual damages, and prove the causal link between the action and those damages.
Joint litigation (Streitgenossenschaft) is a device that permits plaintiffs to join their claims when they form a legal community with respect to the dispute or when their claims arise from the same factual or legal grounds. Such joined suits are not typically regarded as class actions, as each individual plaintiff is entitled to compensation for their own incurred damages and not as a result of being a member of a class.
The combination of court cases (Prozessverbindung) is another method that permits a judge to combine multiple separate court cases into a single trial with a single verdict. According to § 147 ZPO, this is only permissible if all cases concern the same factual and legal basis.
A genuine extension of the legal effect of a court decision beyond the parties involved in the proceedings is offered under corporate law. This procedure applies to the review of stock payoffs under the Stock Corporation Act (Aktiengesetz). Pursuant to Sec. 13 Sentence 2 of the Mediation Procedure Act (Spruchverfahrensgesetz), the court decision concerning the dismissal or direction of a binding arrangement of adequate compensation is effective for and against all shareholders, including those who have already agreed to a previous settlement in this matter.
The Capital Investor Model Case Act (Kapitalanleger-Musterverfahrensgesetz) is an attempt to enable model cases to be brought by a large number of potentially affected parties in the event of disputes, limited to the investment market. In contrast to the US class actions, each affected party must file a lawsuit in its own name in order to participate in the model proceedings.
Effective on November 1, 2018, the Code of Civil Procedure (Zivilprozessordnung) introduced the Model Declaratory Action (§ 606 ZPO) that created the ability to bundle similar claims by many affected parties efficiently into one proceeding.
Registered consumer protection associations can file – if they represent at least 10 individuals – for a (general) judicial finding of whether the factual and legal requirements for claims or legal relationships are met. Affected individuals have to register their claims, which suspends the limitation period on those claims. Since these adjudications are of a general nature, each individual must then assert their claims in their own court proceedings. The competent court is bound by the Model Declaratory Action decision.
German law also recognizes the Associative Action (Verbandsklage), which is comparable to the class action and is predominantly used in environmental law. In civil law, the Associative Action lets an outside body (the association) assert and enforce individual claims on behalf of the persons concerned, and the individual claimant can no longer control the proceedings.
Class actions can be brought by Germans in the US for events in Germany if the facts of the case relate to the US. For example, in the case of the Eschede train disaster, the lawsuit was allowed because several aggrieved parties came from the US and had purchased rail tickets there.
Decisions of the Indian Supreme Court in the 1980s loosened strict locus standi requirements to permit the filing of suits on behalf of rights of deprived sections of society by public-minded individuals or bodies. Although not strictly "class action litigation" as it is understood in American law, Public Interest Litigation arose out of the wide powers of judicial review granted to the Supreme Court of India and the various High Courts under Article 32 and Article 226 of the Constitution of India. The sort of remedies sought from courts in Public Interest Litigation go beyond mere award of damages to all affected groups, and have sometimes (controversially) gone on to include Court monitoring of the implementation of legislation and even the framing of guidelines in the absence of Parliamentary legislation.
However, this innovative jurisprudence did not help the victims of the Bhopal gas tragedy, who were unable to fully prosecute a class-action litigation (as understood in the American sense) against Union Carbide due to procedural rules that would make such litigation impossible to conclude and unwieldy to carry out. Instead, the Government of India exercised its right of parens patriae to appropriate all the claims of the victims and proceeded to litigate on their behalf, first in the New York courts and later in the Indian courts. Ultimately, the matter was settled between the Union of India and Union Carbide (in a settlement overseen by the Supreme Court of India) for a sum of ₹760 crore (about US$470 million at the time) as a complete settlement of all claims of all victims for all time to come.
Public interest litigation has now broadened in scope to cover larger and larger groups of citizens who may be affected by government inaction. Examples of this trend include the conversion of all public transport in the city of Delhi from diesel engines to compressed natural gas engines on the basis of the orders of the Delhi High Court; the monitoring of forest use by the High Courts and the Supreme Court to ensure that there is no unjustified loss of forest cover; and the directions mandating the disclosure of assets of electoral candidates for the Houses of Parliament and State Assembly.
The Supreme Court has observed that the PIL has tended to become a means to gain publicity or obtain relief contrary to constitutionally valid legislation and policy. Observers point out that many High Courts and certain Supreme Court judges are reluctant to entertain PILs filed by non-governmental organizations and activists, citing concerns of separation of powers and parliamentary sovereignty.
In Irish law, there is no such thing as a "class action" per se. Third-party litigation funding is prohibited under Irish law. Instead, there is the 'representative action' (Irish: gníomh ionadaíoch) or 'test case' (cás samplach). A representative action is "where one claimant or defendant, with the same interest as a group of claimants or defendants in an action, institutes or defends proceedings on behalf of that group of claimants or defendants."
Some test cases in Ireland have included:
Italy has class action legislation. Consumer associations can file claims on behalf of groups of consumers to obtain judicial orders against corporations that cause injury or damage to consumers. These types of claims are increasing, and Italian courts have allowed them against banks that continue to apply compound interest on retail clients' current account overdrafts. The introduction of class actions had long been on the government's agenda. On November 19, 2007, the Senato della Repubblica passed a class-action law as part of the Finanziaria 2008, the government's annual budget law. As of December 10, 2007, under the Italian legislative process, the bill was before the Camera dei Deputati, the second house of the Italian Parliament, and still had to be passed there to become effective law. In 2004, the Italian parliament had considered the introduction of a type of class action, specifically in the area of consumer law; no such law was enacted at that time, but scholars demonstrated that class actions (azioni rappresentative) do not conflict with Italian principles of civil procedure. Class actions are regulated by art. 140 bis of the Italian consumers' code, in force since July 1, 2009. On May 19, 2021, a reform of the Italian legal framework on class actions entered into force. The new rules, introduced by Law n. 31 and published on April 18, 2019 (Law n. 31/2019), were initially intended to become effective on April 19, 2020, but had been delayed twice. The new rules on class actions are now included in the Italian Civil Procedure Code (ICPC). Overall, the new class action appears to be a viable instrument which, through a system of economic incentives, could overcome the rational apathy of small-claims holders and ensure redress.
Dutch law allows associations (verenigingen) and foundations (stichtingen) to bring a so-called collective action on behalf of other persons, provided they can represent the interests of such persons according to their by-laws (statuten) (section 3:305a Dutch Civil Code). All types of actions are permitted. This includes a claim for monetary damages, provided the event occurred after 15 November 2016 (pursuant to new legislation which entered into force 1 January 2020). Most class actions over the past decade have been in the field of securities fraud and financial services. The acting association or foundation may come to a collective settlement with the defendant. The settlement may also include – and usually primarily consists of – monetary compensation of damages. Such settlement can be declared binding for all injured parties by the Amsterdam Court of Appeal (section 7:907 Dutch Civil Code). The injured parties have an opt-out right during the opt-out period set by the Court, usually 3 to 6 months. Settlements involving injured parties from outside The Netherlands can also be declared binding by the Court. Since US courts are reluctant to take up class actions brought on behalf of injured parties not residing in the US who have suffered damages due to acts or omissions committed outside the US, combinations of US class actions and Dutch collective actions may come to a settlement that covers plaintiffs worldwide. An example of this is the Royal Dutch Shell Oil Reserves Settlement that was declared binding upon both US and non-US plaintiffs.
"Pozew zbiorowy" or class action has been allowed under Polish law since July 19, 2010. A minimum of 10 persons, suing based on the same law, is required.
Collective litigation has been allowed under Russian law since 2002. Basic criteria are, like in the US, numerosity, commonality, and typicality.
Spanish law allows nominated consumer associations to take action to protect the interests of consumers. A number of groups already have the power to bring collective or class actions: certain consumer associations, bodies legally constituted to defend the "collective interest" and groups of injured parties.
Recent changes to Spanish civil procedure rules include the introduction of a quasi-class action right for certain consumer associations to claim damages on behalf of unidentified classes of consumers. The rules require consumer associations to represent an adequate number of affected parties who have suffered the same harm. Also, any judgment made by the Spanish court will list the individual beneficiaries or, if that is not possible, conditions that need to be fulfilled for a party to benefit from a judgment.
Swiss law does not allow for any form of class action. When the government proposed a new federal code of civil procedure in 2006, replacing the cantonal codes of civil procedure, it rejected the introduction of class actions, arguing that
[It] is alien to European legal thought to allow somebody to exercise rights on the behalf of a large number of people if these do not participate as parties in the action. ... Moreover, the class action is controversial even in its country of origin, the U.S., because it can result in significant procedural problems. ... Finally, the class action can be openly or discretely abused. The sums sued for are usually enormous, so that the respondent can be forced to concede, if they do not want to face sudden huge indebtedness and insolvency (so-called legal blackmail).
The Civil Procedure Rules of the courts of England and Wales came into force in 1999 and have provided for representative actions in limited circumstances (under Part 19.6). These have not been much used, with only two reported cases at the court of first instance in the first ten years after the Civil Procedure Rules took effect. However, a sectoral mechanism was adopted by the Consumer Rights Act 2015, taking effect on October 1, 2015. Under the provisions therein, opt-in or opt-out collective procedures may be certified for breaches of competition law. This is currently the closest mechanism to a class action in England and Wales.
In the United States, the class representative, also called a lead plaintiff, named plaintiff, or representative plaintiff, is the named party in a class-action lawsuit. Although the class representative is named as a party to the litigation, the court must approve the class representative when it certifies the lawsuit as a class action.
The class representative must be able to represent the interests of all the members of the class, by being typical of the class members and not having conflicts with them. He or she is responsible for hiring the attorney, filing the lawsuit, consulting on the case, and agreeing to any settlement. In exchange, the class representative may be entitled to compensation (at the court's discretion) out of the recovery amount.
In securities class actions that allege violations of Section 11 of the Securities Act of 1933, "officers and directors are liable together with the corporation for material misrepresentations in the registration statement."
To have "standing" to sue under Section 11 of the 1933 Act in a class action, a plaintiff must be able to prove that he can "trace" his shares to the registration statement in question, as to which there is alleged a material misstatement or omission. In the absence of an ability to actually trace his shares, such as when securities issued at multiple times are held by the Depository Trust Company in a fungible bulk and physical tracing of particular shares may be impossible, the plaintiff may be barred from pursuing his claim for lack of standing.
In federal courts, class actions are governed by Federal Rules of Civil Procedure Rule 23 and 28 U.S.C.A. § 1332(d). Cases in federal courts are only allowed to proceed as class actions if the court has jurisdiction to hear the case, and if the case meets the criteria set out in Rule 23. In the vast majority of federal class actions, the class is acting as the plaintiff. However, Rule 23 also provides for defendant class actions.
Typically, federal courts are thought to be more favorable for defendants, and state courts more favorable for plaintiffs. Many class actions are filed initially in state court. The defendant will frequently try to remove the case to federal court. The Class Action Fairness Act of 2005 increases defendants' ability to remove state cases to federal court by giving federal courts original jurisdiction for all class actions with damages exceeding $5,000,000 exclusive of interest and costs. The Class Action Fairness Act contains carve-outs for, among other things, shareholder class actions covered by the Private Securities Litigation Reform Act of 1995 and those concerning internal corporate governance issues (the latter typically being brought as shareholder derivative actions in the state courts of Delaware, the state of incorporation of most large corporations).
Class actions may be brought in federal court if the claim arises under federal law or if the claim falls under 28 U.S.C. § 1332(d). Under § 1332(d)(2) the federal district courts have original jurisdiction over any civil action where the amount in controversy exceeds $5,000,000 and the parties are minimally diverse (for example, any member of the plaintiff class is a citizen of a state different from any defendant).
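The jurisdictional test just described can be summarized schematically. The sketch below is a simplification that captures only the amount-in-controversy threshold and minimal diversity, and it deliberately ignores CAFA's exceptions (such as the local-controversy and home-state carve-outs) and the 100-member floor; the example figures and states are hypothetical.

```python
# Simplified sketch of the two headline requirements of 28 U.S.C. § 1332(d)(2)
# discussed above; illustrative only, and not a complete statement of the statute.
def cafa_original_jurisdiction(aggregate_amount, plaintiff_states, defendant_states):
    exceeds_threshold = aggregate_amount > 5_000_000  # exclusive of interest and costs
    # Minimal diversity: at least one class member is a citizen of a state
    # different from at least one defendant (fails only if everyone shares one state).
    minimal_diversity = len(set(plaintiff_states) | set(defendant_states)) > 1
    return exceeds_threshold and minimal_diversity

# Hypothetical example: a multistate consumer class suing a Delaware defendant.
print(cafa_original_jurisdiction(12_500_000, {"CA", "TX", "NY"}, {"DE"}))  # True
```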
Nationwide plaintiff classes are possible, but such suits must have a commonality of issues across state lines. This may be difficult if the civil law in the various states lacks significant commonalities. Large class actions brought in federal court frequently are consolidated for pre-trial purposes through the device of multidistrict litigation (MDL). It is also possible to bring class actions under state law, and in some cases the court may extend its jurisdiction to all the members of the class, including those out of state (or even internationally), as the key element is the jurisdiction that the court has over the defendant.
For the case to proceed as a class action and bind absent class members, the court must certify the class under Rule 23 on a motion from the party wishing to proceed on a class basis. For a class to be certified, the moving party must meet all of the criteria listed under Rule 23(a), and at least one of the criteria listed under Rule 23(b).
The 23(a) criteria are referred to as numerosity, commonality, typicality, and adequacy. Numerosity refers to the number of people in the class. To be certified, the class has to have enough members that simply adding each of them as a named party to the lawsuit would be impractical. There is no bright-line rule to determine numerosity, but classes with hundreds of members are generally deemed to be sufficiently numerous. To satisfy commonality, there must be a common question of law or fact such that "determination of its truth or falsity will resolve an issue that is central to the validity of each one of the claims in one stroke". The typicality requirement ensures that the claims or defenses of the named plaintiff are typical of those of everyone else in the class. Finally, the adequacy requirement states that the named plaintiff must fairly and adequately represent the interests of the absent class members.
Rule 23(b)(3) allows class certification if "questions of law or fact common to class members predominate over any questions affecting only individual members, and that a class action is superior to other available methods for fairly and efficiently adjudicating the controversy."
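The structure of the certification inquiry described in the last two paragraphs can be restated schematically. The sketch below is only a mnemonic: the class, fields, and example values are hypothetical, whether each factor is satisfied is a judicial judgment rather than a computation, and only the Rule 23(b)(3) ground is modeled.

```python
from dataclasses import dataclass

# Schematic restatement of the certification structure described above:
# all four Rule 23(a) prerequisites must be met, plus at least one
# Rule 23(b) ground (only (b)(3) is modeled here).
@dataclass
class CertificationRecord:
    numerosity: bool      # joinder of all members would be impracticable
    commonality: bool     # common questions of law or fact
    typicality: bool      # representative's claims typical of the class
    adequacy: bool        # representative fairly and adequately protects the class
    predominance: bool    # 23(b)(3): common questions predominate
    superiority: bool     # 23(b)(3): class action superior to other methods

    def certifiable_under_b3(self):
        rule_23a = all([self.numerosity, self.commonality,
                        self.typicality, self.adequacy])
        rule_23b3 = self.predominance and self.superiority
        return rule_23a and rule_23b3

# Hypothetical record for a consumer overcharge class:
print(CertificationRecord(True, True, True, True, True, True).certifiable_under_b3())  # True
```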
Due process requires in most cases that notice describing the class action be sent, published, or broadcast to class members. As part of this notice procedure, there may have to be several notices. The first is a notice giving class members the opportunity to opt out of the class; individuals who wish to proceed with their own litigation are entitled to do so, but only to the extent that they give timely notice to the class counsel or the court that they are opting out. Second, if there is a settlement proposal, the court will usually direct the class counsel to send a settlement notice to all the members of the certified class, informing them of the details of the proposed settlement.
Since 1938, many states have adopted rules similar to the FRCP. However, some states, like California, have civil procedure systems that deviate significantly from the federal rules; the California Codes provide for four separate types of class actions. As a result, there are two separate treatises devoted solely to the complex topic of California class actions. Some states, such as Virginia, do not provide for any class actions, while others, such as New York, limit the types of claims that may be brought as class actions.
John Grisham's 2003 novel The King of Torts is a fable of the rights and wrongs of class actions.
{
"paragraph_id": 10,
"text": "The Advisory Committee that drafted the new Rule 23 in the mid-1960s was influenced by two major developments. First was the suggestion of Harry Kalven Jr. and Maurice Rosenfield in 1941 that class-action litigation by individual shareholders on behalf of all shareholders of a company could effectively supplement direct government regulation of securities markets and other similar markets. The second development was the rise of the civil rights movement, environmentalism and consumerism. The groups behind these movements, as well as many others in the 1960s, 1970s and 1980s, all turned to class actions as a means for achieving their goals. For example, a 1978 environmental law treatise reprinted the entire text of Rule 23 and mentioned \"class actions\" 14 times in its index.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Businesses targeted by class actions for inflicting massive aggregate harm have sought ways to avoid class actions altogether. In the 1990s, the US Supreme Court issued several decisions that strengthened the \"federal policy favoring arbitration\". In response, lawyers have added provisions to consumer contracts of adhesion called \"collective action waivers\", which prohibit those signing the contracts from bringing class-action suits. In 2011, the US Supreme Court ruled in a 5–4 decision in AT&T Mobility v. Concepcion that the Federal Arbitration Act of 1925 preempts state laws that prohibit contracts from disallowing class-action lawsuits, which will make it more difficult for consumers to file class-action lawsuits. The dissent pointed to a saving clause in the federal act which allowed states to determine how a contract or its clauses may be revoked.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In two major 21st-century cases, the Supreme Court ruled 5–4 against certification of class actions due to differences in each individual members' circumstances: first in Wal-Mart v. Dukes (2011) and later in Comcast Corp. v. Behrend (2013).",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Companies may insert the phrase \"may elect to resolve any claim by individual arbitration\" into their consumer and employment contracts to use arbitration and prevent class-action lawsuits.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Rejecting arguments that they violated employees' rights to collective bargaining, and that modestly-valued consumer claims would be more efficiently litigated within the parameters of one lawsuit, the U.S. Supreme Court, in Epic Systems Corp. v. Lewis (2018), allowed the use of so-called \"class action waivers\". Citing its deference to freedom to contract principles, the Epic Systems opinion opened the door dramatically to the use of these waivers as a condition of employment, consumer purchases and the like. Some commentators in opposition to the ruling see it as a \"death knell\" to many employment and consumer class actions, and have increasingly pushed for legislation to circumvent it in hopes of reviving otherwise-underrepresented parties' ability to litigate on a group basis. Supporters (mostly pro-business) of the high court's ruling argue its holding is consistent with private contract principles. Many of those supporters had long-since argued that class action procedures were generally inconsistent with due process mandates and unnecessarily promoted litigation of otherwise small claims—thus heralding the ruling's anti-litigation effect.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 2017, the US Supreme Court issued its opinion in Bristol-Meyer Squibb Co. v. Superior Court of California, 137 S. Ct. 1773 (2017), holding that over five hundred plaintiffs from other states cannot bring a consolidated mass action against the pharmaceutical giant in the State of California. This opinion may arguably render nationwide mass action and class action impossible in any single state besides the defendant's home state.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 2020, the 11th Circuit Court of Appeals found incentive awards are impermissible. Incentive awards are a relatively modest payment made to class representatives as part of a class settlement. The ruling was a response to an objector who claimed Rule 23 required that the fee petition be filed before the time frame for class member objections to be filed; and payments to the class representative violates doctrine from two US Supreme Court cases from the 1800s.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "As of 2010, there was no publicly maintained list of nonsecurities class-action settlements, although a securities class-action database exists in the Stanford Law School Securities Class Action Clearinghouse and several for-profit companies maintain lists of the securities settlements. One study of federal settlements required the researcher to manually search databases of lawsuits for the relevant records, although state class actions were not included due to the difficulty in gathering the information. Another source of data is US Bureau of Justice Statistics Civil Justice Survey of State Courts, which offers statistics for the year 2005.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Proponents of class actions state that they offer a number of advantages because they aggregate many individualized claims into one representational lawsuit.",
"title": "Advantages"
},
{
"paragraph_id": 19,
"text": "First, aggregation can increase the efficiency of the legal process, and lower the costs of litigation. In cases with common questions of law and fact, aggregation of claims into a class action may avoid the necessity of repeating \"days of the same witnesses, exhibits and issues from trial to trial\". Jenkins v. Raymark Indus. Inc., 782 F.2d 468, 473 (5th Cir. 1986) (granting certification of a class action involving asbestos).",
"title": "Advantages"
},
{
"paragraph_id": 20,
"text": "Second, a class action may overcome \"the problem that small recoveries do not provide the incentive for any individual to bring a solo action prosecuting his or her rights\". Amchem Prods., Inc. v. Windsor, 521 U.S. 591, 617 (1997) (quoting Mace v. Van Ru Credit Corp., 109 F.3d 388, 344 (7th Cir. 1997)). \"A class action solves this problem by aggregating the relatively paltry potential recoveries into something worth someone's (usually an attorney's) labor.\" Amchem Prods., Inc., 521 U.S. at 617 (quoting Mace, 109 F.3d at 344). In other words, a class action ensures that a defendant who engages in widespread harm – but does so minimally against each individual plaintiff – must compensate those individuals for their injuries. For example, thousands of shareholders of a public company may have losses too small to justify separate lawsuits, but a class action can be brought efficiently on behalf of all shareholders. Perhaps even more important than compensation is that class treatment of claims may be the only way to impose the costs of wrongdoing on the wrongdoer, thus deterring future wrongdoing.",
"title": "Advantages"
},
{
"paragraph_id": 21,
"text": "Third, class-action cases may be brought to purposely change behavior of a class of which the defendant is a member. Landeros v. Flood (1976) was a landmark case decided by the California Supreme Court that aimed at purposefully changing the behavior of doctors, encouraging them to report suspected child abuse. Otherwise, they would face the threat of civil action for damages in tort proximately flowing from the failure to report the suspected injuries. Previously, many physicians had remained reluctant to report cases of apparent child abuse, despite existing law that required it.",
"title": "Advantages"
},
{
"paragraph_id": 22,
"text": "Fourth, in \"limited fund\" cases, a class action ensures that all plaintiffs receive relief and that early-filing plaintiffs do not raid the fund (i.e., the defendant) of all its assets before other plaintiffs may be compensated. See Ortiz v. Fibreboard Corp., 527 U.S. 815 (1999). A class action in such a situation centralizes all claims into one venue where a court can equitably divide the assets amongst all the plaintiffs if they win the case.",
"title": "Advantages"
},
{
"paragraph_id": 23,
"text": "Finally, a class action avoids the situation where different court rulings could create \"incompatible standards\" of conduct for the defendant to follow. See Fed. R. Civ. P. 23(b)(1)(A). For example, a court might certify a case for class treatment where a number of individual bond-holders sue to determine whether they may convert their bonds to common stock. Refusing to litigate the case in one trial could result in different outcomes and inconsistent standards of conduct for the defendant corporation. Thus, courts will generally allow a class action in such a situation. See, e.g., Van Gemert v. Boeing Co., 259 F. Supp. 125 (S.D.N.Y. 1966).",
"title": "Advantages"
},
{
"paragraph_id": 24,
"text": "Whether a class action is superior to individual litigation depends on the case and is determined by the judge's ruling on a motion for class certification. The Advisory Committee Note to Rule 23, for example, states that mass torts are ordinarily \"not appropriate\" for class treatment. Class treatment may not improve the efficiency of a mass tort because the claims frequently involve individualized issues of law and fact that will have to be re-tried on an individual basis. See Castano v. Am. Tobacco Co., 84 F.3d 734 (5th Cir. 1996) (rejecting nationwide class action against tobacco companies). Mass torts also involve high individual damage awards; thus, the absence of class treatment will not impede the ability of individual claimants to seek justice. Other cases, however, may be more conducive to class treatment.",
"title": "Advantages"
},
{
"paragraph_id": 25,
"text": "The preamble to the Class Action Fairness Act of 2005, passed by the United States Congress, found:",
"title": "Advantages"
},
{
"paragraph_id": 26,
"text": "Class-action lawsuits are an important and valuable part of the legal system when they permit the fair and efficient resolution of legitimate claims of numerous parties by allowing the claims to be aggregated into a single action against a defendant that has allegedly caused harm.",
"title": "Advantages"
},
{
"paragraph_id": 27,
"text": "There are several criticisms of class actions. The preamble to the Class Action Fairness Act stated that some abusive class actions harmed class members with legitimate claims and defendants that have acted responsibly, adversely affected interstate commerce, and undermined public respect for the country's judicial system.",
"title": "Criticisms"
},
{
"paragraph_id": 28,
"text": "Class members often receive little or no benefit from class actions. Examples cited for this include large fees for the attorneys, while leaving class members with coupons or other awards of little or no value; unjustified awards are made to certain plaintiffs at the expense of other class members; and confusing notices are published that prevent class members from being able to fully understand and effectively exercise their rights.",
"title": "Criticisms"
},
{
"paragraph_id": 29,
"text": "For example, in the United States, class lawsuits sometimes bind all class members with a low settlement. These \"coupon settlements\" (which usually allow the plaintiffs to receive a small benefit such as a small check or a coupon for future services or products with the defendant company) are a way for a defendant to forestall major liability by precluding many people from litigating their claims separately, to recover reasonable compensation for the damages. However, existing law requires judicial approval of all class-action settlements, and in most cases, class members are given a chance to opt out of class settlement, though class members, despite opt-out notices, may be unaware of their right to opt-out because they did not receive the notice, did not read it or did not understand it.",
"title": "Criticisms"
},
{
"paragraph_id": 30,
"text": "The Class Action Fairness Act of 2005 addresses these concerns. An independent expert may scrutinize coupon settlements before judicial approval in order to ensure that the settlement will be of value to the class members (28 U.S.C.A. 1712(d)). Further, if the action provides for settlement in coupons, \"the portion of any attorney's fee award to class counsel that is attributable to the award of the coupons shall be based on the value to class members of the coupons that are redeemed\". 28 U.S.C.A. 1712(a).",
"title": "Criticisms"
},
{
"paragraph_id": 31,
"text": "Class action cases present significant ethical challenges. Defendants can hold reverse auctions and any of several parties can engage in collusive settlement discussions. Subclasses may have interests that diverge greatly from the class but may be treated the same. Proposed settlements could offer some groups (such as former customers) much greater benefits than others. In one paper presented at an ABA conference on class actions in 2007, authors commented that \"competing cases can also provide opportunities for collusive settlement discussions and reverse auctions by defendants anxious to resolve their new exposure at the most economic cost\".",
"title": "Ethics"
},
{
"paragraph_id": 32,
"text": "Although normally plaintiffs are the class, defendant class actions are also possible. For example, in 2005, the Roman Catholic Archdiocese of Portland in Oregon was sued as part of the Catholic priest sex-abuse scandal. All parishioners of the Archdiocese's churches were cited as a defendant class. This was done to include their assets (local churches) in any settlement. Where both the plaintiffs and the defendants have been organized into court-approved classes, the action is called a bilateral class action.",
"title": "Defendant class action"
},
{
"paragraph_id": 33,
"text": "In a class action, the plaintiff seeks court approval to litigate on behalf of a group of similarly situated persons. Not every plaintiff looks for or could obtain such approval. As a procedural alternative, plaintiff's counsel may attempt to sign up every similarly situated person that counsel can find as a client. Plaintiff's counsel can then join the claims of all of these persons in one complaint, a so-called \"mass action\", hoping to have the same efficiencies and economic leverage as if a class had been certified.",
"title": "Mass actions"
},
{
"paragraph_id": 34,
"text": "Because mass actions operate outside the detailed procedures laid out for class actions, they can pose special difficulties for both plaintiffs, defendants, and the court. For example, settlement of class actions follows a predictable path of negotiation with class counsel and representatives, court scrutiny, and notice. There may not be a way to uniformly settle all of the many claims brought via a mass action. Some states permit plaintiff's counsel to settle for all the mass action plaintiffs according to a majority vote, for example. Other states, such as New Jersey, require each plaintiff to approve the settlement of that plaintiff's own individual claims.",
"title": "Mass actions"
},
{
"paragraph_id": 35,
"text": "Class actions were recognized in \"Halabi\" leading case (Supreme Court, 2009).",
"title": "Class action legislation"
},
{
"paragraph_id": 36,
"text": "Class actions became part of the Australian legal landscape only when the Federal Parliament amended the Federal Court of Australia Act (\"the FCAA\") in 1992 to introduce the \"representative proceedings\", the equivalent of the American \"class actions\".",
"title": "Class action legislation"
},
{
"paragraph_id": 37,
"text": "Likewise, class actions appeared slowly in the New Zealand legal system. However, a group can bring litigation through the action of a representative under the High Court Rules which provide that one or a multitude of persons may sue on behalf of, or for the benefit of, all persons \"with the same interest in the subject matter of a proceeding\". The presence and expansion of litigation funders have been playing a significant role in the emergence of class actions in New Zealand. For example, the \"Fair Play on Fees\" proceedings in relation to penalty fees charged by banks were funded by Litigation Lending Services (LLS), a company specializing in the funding and management of litigation in Australia and New Zealand. It was the biggest class-action suit in New Zealand history.",
"title": "Class action legislation"
},
{
"paragraph_id": 38,
"text": "The Austrian Code of Civil Procedure (Zivilprozessordnung – ZPO) does not provide for a special proceeding for complex class-action litigation. However, Austrian consumer organizations (Verein für Konsumenteninformation (VKI) and the Federal Chamber of Labour / Bundesarbeitskammer) have brought claims on behalf of hundreds or even thousands of consumers. In these cases, the individual consumers assigned their claims to one entity, who has then brought an ordinary (two-party) lawsuit over the assigned claims. The monetary benefits were redistributed among the class. This technique, labeled as \"class action Austrian style,\" allows for a significant reduction of overall costs. The Austrian Supreme Court, in a judgment, confirmed the legal admissibility of these lawsuits under the condition that all claims are essentially based on the same grounds.",
"title": "Class action legislation"
},
{
"paragraph_id": 39,
"text": "The Austrian Parliament unanimously requested the Austrian Federal Minister for Justice to examine the possibility of new legislation providing for a cost-effective and appropriate way to deal with mass claims. Together with the Austrian Ministry for Social Security, Generations and Consumer Protection, the Justice Ministry opened the discussion with a conference held in Vienna in June 2005. With the aid of a group of experts from many fields, the Justice Ministry began drafting the new law in September 2005. With the individual positions varying greatly, a political consensus could not be reached.",
"title": "Class action legislation"
},
{
"paragraph_id": 40,
"text": "Provincial laws in Canada allow class actions. All provinces permit plaintiff classes and some permit defendant classes. Quebec was the first province to enact class proceedings legislation, in 1978. Ontario was next, with the Class Proceedings Act, 1992. As of 2008, 9 of 10 provinces had enacted comprehensive class actions legislation. In Prince Edward Island, where no comprehensive legislation exists, following the decision of the Supreme Court of Canada in Western Canadian Shopping Centres Inc. v. Dutton, [2001] 2 S.C.R. 534, class actions may be advanced under a local rule of court. The Federal Court of Canada permits class actions under Part V.1 of the Federal Courts Rules.",
"title": "Class action legislation"
},
{
"paragraph_id": 41,
"text": "Legislation in Saskatchewan, Manitoba, Ontario, and Nova Scotia expressly or by judicial opinion has been read to allow for what are informally known as national \"opt-out\" class actions, whereby residents of other provinces may be included in the class definition and potentially be bound by the court's judgment on common issues unless they opt-out in a prescribed manner and time. Court rulings have determined that this permits a court in one province to include residents of other provinces in the class action on an \"opt-out\" basis.",
"title": "Class action legislation"
},
{
"paragraph_id": 42,
"text": "Judicial opinions have indicated that provincial legislative national opt-out powers should not be exercised to interfere with the ability of another province to certify a parallel class action for residents of other provinces. The first court to certify will generally exclude residents of provinces whose courts have certified a parallel class action. However, in the Vioxx litigation, two provincial courts certified overlapping class actions whereby Canadian residents were class members in two class actions in two provinces. Both decisions are under appeal.",
"title": "Class action legislation"
},
{
"paragraph_id": 43,
"text": "Other legislation may provide for representative actions on behalf of a large number of plaintiffs, independent of class action procedures. For instance, under Ontario's Condominium Act, a condominium's governing corporation may launch a lawsuit on behalf of the owners for damage to the condominium's common elements, even though the corporation does not own the common elements.",
"title": "Class action legislation"
},
{
"paragraph_id": 44,
"text": "The largest class action suit in Canada was settled in 2005 after Nora Bernard initiated efforts that led to an estimated 79,000 survivors of Canada's residential school system suing the Canadian government. The settlement amounted to upwards of $5 billion.",
"title": "Class action legislation"
},
{
"paragraph_id": 45,
"text": "Chile approved class actions in 2004. The Chilean model is technically an opt-out issue class action, followed by a compensatory stage which can be collective or individual. This means that the class action is designed to declare the defendant generally liable with erga omnes effects if and only if the defendant is found liable, and the declaratory judgment can be used then to pursue damages in the same procedure or in individual ones in different jurisdictions. If the latter is the case, the liability cannot be discussed, but only the damages. There under the Chilean procedural rules, one particular case works as an opt-out class action for damages. This is the case when defendants can identify and compensate consumers directly, i.e. because it is their banking institution. In such cases, the judge can skip the compensatory stage and order redress directly. Since 2005 more than 100 cases have been filed, mostly by Servicio Nacional del Consumidor [SERNAC], the Chilean consumer protection agency. Salient cases have been Condecus v. BancoEstado and SERNAC v. La Polar.",
"title": "Class action legislation"
},
{
"paragraph_id": 46,
"text": "Under French law, an association can represent the collective interests of consumers; however, each claimant must be individually named in the lawsuit. On January 4, 2005, President Chirac urged changes that would provide greater consumer protection. A draft bill was proposed in April 2006 but did not pass.",
"title": "Class action legislation"
},
{
"paragraph_id": 47,
"text": "Following the change of majority in France in 2012, the new government proposed introducing class actions into French law. The project of \"loi Hamon\" of May 2013 aimed to limit the class action to consumer and competition disputes. The law was passed on March 1, 2014.",
"title": "Class action legislation"
},
{
"paragraph_id": 48,
"text": "Class actions are generally not permitted in Germany, as German law does not recognize the concept of a targeted class being affected by certain actions. This requires each plaintiff to individually prove that they were affected by an action, and present their individual damages, and prove the causality between both parties.",
"title": "Class action legislation"
},
{
"paragraph_id": 49,
"text": "Joint litigation (Streitgenossenschaft) is a legal act that may permit plaintiffs that are in the same legal community with respect to the dispute, or are entitled by the same factual or legal reason. These are not typically regarded as class action suits, as each individual plaintiff is entitled to compensation for their individual, incurred damages and not as a result of being a member of a class.",
"title": "Class action legislation"
},
{
"paragraph_id": 50,
"text": "The combination of court cases (Prozessverbindung) is another method that permits a judge to combine multiple separate court cases into a single trial with a single verdict. According to § 147 ZPO, this is only permissible if all cases are regarding the same factual and legal event and basis.",
"title": "Class action legislation"
},
{
"paragraph_id": 51,
"text": "A genuine extension of the legal effect of a court decision beyond the parties involved in the proceedings is offered under corporate law. This procedure applies to the review of stock payoffs under Stock Corporation Act (Aktiengesetz. Pursuant to Sec. 13 Sentence 2 Mediation Procedure Act (Spruchverfahrensgesetz §), the court decision concerning the dismissal or direction of a binding arrangement of an adequate compensation is effective for and against all shareholders, including those who have already agreed to a previous settlement in this matter.",
"title": "Class action legislation"
},
{
"paragraph_id": 52,
"text": "The Capital Investor Model Case Act (Kapitalanleger-Musterverfahrensgesetz) is an attempt to enable model cases to be brought by a large number of potentially affected parties in the event of disputes, limited to the investment market. In contrast to the US class actions, each affected party must file a lawsuit in its own name in order to participate in the model proceedings.",
"title": "Class action legislation"
},
{
"paragraph_id": 53,
"text": "Effective on November 1, 2018, the Code of Civil Procedure (Zivilprozessordnung) introduced the Model Declaratory Action (§ 606 ZPO) that created the ability to bundle similar claims by many affected parties efficiently into one proceeding.",
"title": "Class action legislation"
},
{
"paragraph_id": 54,
"text": "Registered Consumer Protection Associations can file – if they represent at least 10 individuals – for a (general) judicial finding whether the factual and legal requirements for of claims or legal relationships are met or not. These individuals have to register in order to inhibit their claims. Since these Adjudications are more of a general nature, each individual must assert their claims in their own court proceedings. The competent court is bound by the Model Declaratory Action decision.",
"title": "Class action legislation"
},
{
"paragraph_id": 55,
"text": "German law also recognizes the Associative Action (Verbandsklage), which is comparable to the class action and is predominantly used in environmental law. In civil law, the Associative Action is represented by a foreign body in the matter of asserting and enforcing individual claims and the claimant can no longer control the proceedings.",
"title": "Class action legislation"
},
{
"paragraph_id": 56,
"text": "Class actions can be brought by Germans in the US for events in Germany if the facts of the case relate to the US. For example, in the case of the Eschede train disaster, the lawsuit was allowed because several aggrieved parties came from the US and had purchased rail tickets there.",
"title": "Class action legislation"
},
{
"paragraph_id": 57,
"text": "Decisions of the Indian Supreme Court in the 1980s loosened strict locus standi requirements to permit the filing of suits on behalf of rights of deprived sections of society by public-minded individuals or bodies. Although not strictly \"class action litigation\" as it is understood in American law, Public Interest Litigation arose out of the wide powers of judicial review granted to the Supreme Court of India and the various High Courts under Article 32 and Article 226 of the Constitution of India. The sort of remedies sought from courts in Public Interest Litigation go beyond mere award of damages to all affected groups, and have sometimes (controversially) gone on to include Court monitoring of the implementation of legislation and even the framing of guidelines in the absence of Parliamentary legislation.",
"title": "Class action legislation"
},
{
"paragraph_id": 58,
"text": "However, this innovative jurisprudence did not help the victims of the Bhopal gas tragedy, who were unable to fully prosecute a class-action litigation (as understood in the American sense) against Union Carbide due to procedural rules that would make such litigation impossible to conclude and unwieldy to carry out. Instead, the Government of India exercised its right of parens patriae to appropriate all the claims of the victims and proceeded to litigate on their behalf, first in the New York courts and later, in the Indian courts. Ultimately, the matter was settled between the Union of India and Union Carbide (in a settlement overseen by the Supreme Court of India) for a sum of ₹760 crore (US$95 million) as a complete settlement of all claims of all victims for all time to come.",
"title": "Class action legislation"
},
{
"paragraph_id": 59,
"text": "Public interest litigation has now broadened in scope to cover larger and larger groups of citizens who may be affected by government inaction. Examples of this trend include the conversion of all public transport in the city of Delhi from diesel engines to compressed natural gas engines on the basis of the orders of the Delhi High Court; the monitoring of forest use by the High Courts and the Supreme Court to ensure that there is no unjustified loss of forest cover; and the directions mandating the disclosure of assets of electoral candidates for the Houses of Parliament and State Assembly.",
"title": "Class action legislation"
},
{
"paragraph_id": 60,
"text": "The Supreme Court has observed that the PIL has tended to become a means to gain publicity or obtain relief contrary to constitutionally valid legislation and policy. Observers point out that many High Courts and certain Supreme Court judges are reluctant to entertain PILs filed by non-governmental organizations and activists, citing concerns of separation of powers and parliamentary sovereignty.",
"title": "Class action legislation"
},
{
"paragraph_id": 61,
"text": "In Irish law, there is no such thing as a \"class action\" per se. Third-party litigation funding is prohibited under Irish law. Instead, there is the 'representative action' (Irish: gníomh ionadaíoch) or 'test case' (cás samplach). A representative action is \"where one claimant or defendant, with the same interest as a group of claimants or defendants in an action, institutes or defends proceedings on behalf of that group of claimants or defendants.\"",
"title": "Class action legislation"
},
{
"paragraph_id": 62,
"text": "Some test cases in Ireland have included:",
"title": "Class action legislation"
},
{
"paragraph_id": 63,
"text": "Italy has class action legislation. Consumer associations can file claims on behalf of groups of consumers to obtain judicial orders against corporations that cause injury or damage to consumers. These types of claims are increasing, and Italian courts have allowed them against banks that continue to apply compound interest on retail clients' current account overdrafts. The introduction of class actions is on the government's agenda. On November 19, 2007, the Senato della Repubblica passed a class-action law in Finanziaria 2008, a financial document for the economy management of the government. From 10 December 2007, in order of Italian legislation system, the law is before the House and has to be passed also by the Camera dei Deputati, the second house of Italian Parliament, to become an effective law. In 2004, the Italian parliament considered the introduction of a type of class action, specifically in the area of consumer law. No such law has been enacted, but scholars demonstrated that class actions (azioni rappresentative) do not contrast with Italian principles of civil procedure. Class action is regulated by art. 140 bis of the Italian consumers' code and has been in force since 1 July 2009. On May 19, 2021, the reform of the Italian legal framework on class actions finally entered into force. The new rules, designed by Law n. 31 and published on April 18, 2019, (Law n. 31/2019), were initially intended to become effective on April 19, 2020, but had been delayed twice. The new rules on class actions are now included in the Italian Civil Procedure Code (ICPC). Overall, the new class action appears to be a viable instrument which, through a system of economic incentives, could overcome the rational apathy of small-claims holders and ensure redress.",
"title": "Class action legislation"
},
{
"paragraph_id": 64,
"text": "Dutch law allows associations (verenigingen) and foundations (stichtingen) to bring a so-called collective action on behalf of other persons, provided they can represent the interests of such persons according to their by-laws (statuten) (section 3:305a Dutch Civil Code). All types of actions are permitted. This includes a claim for monetary damages, provided the event occurred after 15 November 2016 (pursuant to new legislation which entered into force 1 January 2020). Most class actions over the past decade have been in the field of securities fraud and financial services. The acting association or foundation may come to a collective settlement with the defendant. The settlement may also include – and usually primarily consists of – monetary compensation of damages. Such settlement can be declared binding for all injured parties by the Amsterdam Court of Appeal (section 7:907 Dutch Civil Code). The injured parties have an opt-out right during the opt-out period set by the Court, usually 3 to 6 months. Settlements involving injured parties from outside The Netherlands can also be declared binding by the Court. Since US courts are reluctant to take up class actions brought on behalf of injured parties not residing in the US who have suffered damages due to acts or omissions committed outside the US, combinations of US class actions and Dutch collective actions may come to a settlement that covers plaintiffs worldwide. An example of this is the Royal Dutch Shell Oil Reserves Settlement that was declared binding upon both US and non-US plaintiffs.",
"title": "Class action legislation"
},
{
"paragraph_id": 65,
"text": "\"Pozew zbiorowy\" or class action has been allowed under Polish law since July 19, 2010. A minimum of 10 persons, suing based on the same law, is required.",
"title": "Class action legislation"
},
{
"paragraph_id": 66,
"text": "Collective litigation has been allowed under Russian law since 2002. Basic criteria are, like in the US, numerosity, commonality, and typicality.",
"title": "Class action legislation"
},
{
"paragraph_id": 67,
"text": "Spanish law allows nominated consumer associations to take action to protect the interests of consumers. A number of groups already have the power to bring collective or class actions: certain consumer associations, bodies legally constituted to defend the \"collective interest\" and groups of injured parties.",
"title": "Class action legislation"
},
{
"paragraph_id": 68,
"text": "Recent changes to Spanish civil procedure rules include the introduction of a quasi-class action right for certain consumer associations to claim damages on behalf of unidentified classes of consumers. The rules require consumer associations to represent an adequate number of affected parties who have suffered the same harm. Also, any judgment made by the Spanish court will list the individual beneficiaries or, if that is not possible, conditions that need to be fulfilled for a party to benefit from a judgment.",
"title": "Class action legislation"
},
{
"paragraph_id": 69,
"text": "Swiss law does not allow for any form of class action. When the government proposed a new federal code of civil procedure in 2006, replacing the cantonal codes of civil procedure, it rejected the introduction of class actions, arguing that",
"title": "Class action legislation"
},
{
"paragraph_id": 70,
"text": "[It] is alien to European legal thought to allow somebody to exercise rights on the behalf of a large number of people if these do not participate as parties in the action. ... Moreover, the class action is controversial even in its country of origin, the U.S., because it can result in significant procedural problems. ... Finally, the class action can be openly or discretely abused. The sums sued for are usually enormous, so that the respondent can be forced to concede, if they do not want to face sudden huge indebtness and insolvency (so-called legal blackmail).",
"title": "Class action legislation"
},
{
"paragraph_id": 71,
"text": "The Civil Procedure Rules of the courts of England and Wales came into force in 1999 and have provided for representative actions in limited circumstances (under Part 19.6). These have not been much used, with only two reported cases at the court of first instance in the first ten years after the Civil Procedure Rules took effect. However, a sectoral mechanism was adopted by the Consumer Rights Act 2015, taking effect on October 1, 2015. Under the provisions therein, opt-in or opt-out collective procedures may be certified for breaches of competition law. This is currently the closest mechanism to a class action in England and Wales.",
"title": "Class action legislation"
},
{
"paragraph_id": 72,
"text": "In the United States, the class representative, also called a lead plaintiff, named plaintiff, or representative plaintiff is the named party in a class-action lawsuit. Although the class representative is named as a party to the litigation, the court must approve the class representative when it certifies the lawsuit as a class action.",
"title": "Class action legislation"
},
{
"paragraph_id": 73,
"text": "The class representative must be able to represent the interests of all the members of the class, by being typical of the class members and not having conflicts with them. He or she is responsible for hiring the attorney, filing the lawsuit, consulting on the case, and agreeing to any settlement. In exchange, the class representative may be entitled to compensation (at the court's discretion) out of the recovery amount.",
"title": "Class action legislation"
},
{
"paragraph_id": 74,
"text": "In securities class actions that allege violations of Section 11 of the Securities Act of 1933, \"officers and directors are liable together with the corporation for material misrepresentations in the registration statement.\"",
"title": "Class action legislation"
},
{
"paragraph_id": 75,
"text": "To have \"standing\" to sue under Section 11 of the 1933 Act in a class action, a plaintiff must be able to prove that he can \"trace\" his shares to the registration statement in question, as to which there is alleged a material misstatement or omission. In the absence of an ability to actually trace his shares, such as when securities issued at multiple times are held by the Depository Trust Company in a fungible bulk and physical tracing of particular shares may be impossible, the plaintiff may be barred from pursuing his claim for lack of standing.",
"title": "Class action legislation"
},
{
"paragraph_id": 76,
"text": "In federal courts, class actions are governed by Federal Rules of Civil Procedure Rule 23 and 28 U.S.C.A. § 1332(d). Cases in federal courts are only allowed to proceed as class actions if the court has jurisdiction to hear the case, and if the case meets the criteria set out in Rule 23. In the vast majority of federal class actions, the class is acting as the plaintiff. However, Rule 23 also provides for defendant class actions.",
"title": "Class action legislation"
},
{
"paragraph_id": 77,
"text": "Typically, federal courts are thought to be more favorable for defendants, and state courts more favorable for plaintiffs. Many class actions are filed initially in state court. The defendant will frequently try to remove the case to federal court. The Class Action Fairness Act of 2005 increases defendants' ability to remove state cases to federal court by giving federal courts original jurisdiction for all class actions with damages exceeding $5,000,000 exclusive of interest and costs. The Class Action Fairness Act contains carve-outs for, among other things, shareholder class actions covered by the Private Securities Litigation Reform Act of 1995 and those concerning internal corporate governance issues (the latter typically being brought as shareholder derivative actions in the state courts of Delaware, the state of incorporation of most large corporations).",
"title": "Class action legislation"
},
{
"paragraph_id": 78,
"text": "Class actions may be brought in federal court if the claim arises under federal law or if the claim falls under 28 U.S.C. § 1332(d). Under § 1332(d)(2) the federal district courts have original jurisdiction over any civil action where the amount in controversy exceeds $5,000,000 and",
"title": "Class action legislation"
},
{
"paragraph_id": 79,
"text": "Nationwide plaintiff classes are possible, but such suits must have a commonality of issues across state lines. This may be difficult if the civil law in the various states lack significant commonalities. Large class actions brought in federal court frequently are consolidated for pre-trial purposes through the device of multidistrict litigation (MDL). It is also possible to bring class actions under state law, and in some cases the court may extend its jurisdiction to all the members of the class, including out of state (or even internationally) as the key element is the jurisdiction that the court has over the defendant.",
"title": "Class action legislation"
},
{
"paragraph_id": 80,
"text": "For the case to proceed as a class action and bind absent class members, the court must certify the class under Rule 23 on a motion from the party wishing to proceed on a class basis. For a class to be certified, the moving party must meet all of the criteria listed under Rule 23(a), and at least one of the criteria listed under Rule 23(b).",
"title": "Class action legislation"
},
{
"paragraph_id": 81,
"text": "The 23(a) criteria are referred to as numerosity, commonality, typicality, and adequacy. Numerosity refers to the number of people in the class. To be certified, the class has to have enough members that simply adding each of them as a named party to the lawsuit would be impractical. There is no bright-line rule to determine numerosity, but classes with hundreds of members are generally deemed to be sufficiently numerous. To satisfy commonality, there must be a common question of law and fact such that \"determination of its truth or falsity will resolve an issue that is central to the validity of each one of the claims in one stroke\". The typicality requirement ensures that the claims or defenses of the named plaintiff are typical of those of everyone else in the class. Finally, adequacy requirement states that the named plaintiff must fairly and adequately represent the interests of the absent class members.",
"title": "Class action legislation"
},
{
"paragraph_id": 82,
"text": "Rule 23(b)(3) allows class certification if \"questions of law or fact common to class members predominate over any questions affecting only individual members, and that a class action is superior to other available methods for fairly and efficiently adjudicating the controversy.\"",
"title": "Class action legislation"
},
{
"paragraph_id": 83,
"text": "Due process requires in most cases that notice describing the class action be sent, published, or broadcast to class members. As part of this notice procedure, there may have to be several notices, first a notice allowing class members to opt out of the class, i.e. if individuals wish to proceed with their own litigation they are entitled to do so, only to the extent that they give timely notice to the class counsel or the court that they are opting out. Second, if there is a settlement proposal, the court will usually direct the class counsel to send a settlement notice to all the members of the certified class, informing them of the details of the proposed settlement.",
"title": "Class action legislation"
},
{
"paragraph_id": 84,
"text": "Since 1938, many states have adopted rules similar to the FRCP. However, some states, like California, have civil procedure systems, which deviate significantly from the federal rules; the California Codes provide for four separate types of class actions. As a result, there are two separate treatises devoted solely to the complex topic of California class actions. Some states, such as Virginia, do not provide for any class actions, while others, such as New York, limit the types of claims that may be brought as class actions.",
"title": "Class action legislation"
},
{
"paragraph_id": 85,
"text": "John Grisham's 2003 novel The King of Torts is a fable of the rights and wrongs of class actions.",
"title": "In fiction"
}
] | A class action, also known as a class-action lawsuit, class suit, or representative action, is a type of lawsuit where one of the parties is a group of people who are represented collectively by a member or members of that group. The class action originated in the United States and is still predominantly an American phenomenon, but Canada, as well as several European countries with civil law, have made changes in recent years to allow consumer organizations to bring claims on behalf of consumers. | 2001-11-21T21:06:33Z | 2023-12-12T04:22:55Z | [
"Template:Webarchive",
"Template:Cite court",
"Template:Short description",
"Template:About",
"Template:Citation needed",
"Template:Anchor",
"Template:Cite web",
"Template:Cite book",
"Template:Citation",
"Template:Tort law",
"Template:See also",
"Template:Lang-ga",
"Template:Blockquote",
"Template:Reflist",
"Template:Rp",
"Template:Main",
"Template:Full citation needed",
"Template:Cite news",
"Template:Cite journal",
"Template:Authority control",
"Template:TOC limit",
"Template:Civil procedure (United States)",
"Template:Spaced ndash",
"Template:INRConvert",
"Template:Frcp"
] | https://en.wikipedia.org/wiki/Class_action |
7,201 | Contempt of court | Contempt of court, often referred to simply as "contempt", is the crime of being disobedient to or disrespectful toward a court of law and its officers in the form of behavior that opposes or defies the authority, justice, and dignity of the court. A similar attitude toward a legislative body is termed contempt of Parliament or contempt of Congress. The verb for "to commit contempt" is contemn (as in "to contemn a court order") and a person guilty of this is a contemnor or contemner.
There are broadly two categories of contempt: being disrespectful to legal authorities in the courtroom, or willfully failing to obey a court order. Contempt proceedings are especially used to enforce equitable remedies, such as injunctions. In some jurisdictions, the refusal to respond to subpoena, to testify, to fulfill the obligations of a juror, or to provide certain information can constitute contempt of the court.
When a court decides that an action constitutes contempt of court, it can issue an order in the context of a court trial or hearing that declares a person or organization to have disobeyed or been disrespectful of the court's authority, called "found" or "held" in contempt. That is the judge's strongest power to impose sanctions for acts that disrupt the court's normal process.
A finding of being in contempt of court may result from a failure to obey a lawful order of a court, showing disrespect for the judge, disruption of the proceedings through poor behavior, or publication of material or non-disclosure of material, which in doing so is deemed likely to jeopardize a fair trial. A judge may impose sanctions such as a fine, jail or social service for someone found guilty of contempt of court, which makes contempt of court a process crime. Judges in common law systems usually have more extensive power to declare someone in contempt than judges in civil law systems.
Contempt of court is essentially seen as a form of disturbance that may impede the functioning of the court. The judge may impose fines and/or jail time upon any person committing contempt of court. The person is usually released upon his or her agreement to fulfill the wishes of the court. Civil contempt can involve acts of omission. The judge will make use of warnings in most situations that may lead to a person being charged with contempt if the warnings are ignored. It is relatively rare that a person is charged with contempt without first receiving at least one warning from the judge. Constructive contempt, also called consequential contempt, occurs when a person fails to fulfill the will of the court as it applies to outside obligations of the person. In most cases, constructive contempt is considered to be in the realm of civil contempt due to its passive nature.
Indirect contempt is associated with civil and constructive contempt and involves a failure to follow court orders. Criminal contempt includes anything that could be considered a disturbance, such as repeatedly talking out of turn, bringing forth previously banned evidence, or harassment of any other party in the courtroom, including committing an assault against the defendant in a criminal case. There have been instances during murder trials in which grieving family members of murder victims have attacked the defendants in courtrooms in plain view of judges, bailiffs, and jurors, leading to those family members being charged with contempt. Direct contempt is an unacceptable act in the presence of the judge (in facie curiae), and generally begins with a warning; it may be accompanied by the immediate imposition of a punishment.
In Australia, a judge may impose a fine or jail for contempt of court.
A Belgian correctional or civil judge may immediately try the person for insulting the court.
In Canada, contempt of court is an exception to the general principle that all criminal offences are set out in the federal Criminal Code. Contempt of court and contempt of Parliament are the only remaining common law offences in Canada.
Contempt of court includes the following behaviors:
This section applies only to the Federal Court of Appeal and Federal Court.
Under the Federal Court Rules, Rules 466 and 467, a person who is accused of contempt must first be served with a contempt order and then appear in court to answer the charges. Convictions can only be made when proof beyond a reasonable doubt is achieved.
If it is a matter of urgency or the contempt was committed in front of a judge, that person can be punished immediately. Punishment can include imprisonment for a period of less than five years or until the person complies with the order, or a fine.
Under the Tax Court of Canada Rules, made under the Tax Court of Canada Act, a person who is found to be in contempt may be imprisoned for a period of less than two years or fined. Similar procedures for serving an order first are also used at the Tax Court.
Different procedures exist for different provincial courts. For example, in British Columbia, a justice of the peace can only issue a summons to an offender for contempt, which will be dealt with by a judge, even if the offence was done in the face of the justice.
In Hong Kong, judges of the Court of Final Appeal, the High Court, and the District Court, along with members of the various tribunals and the Coroner's Court, all have the power to impose immediate punishments for contempt in the face of the court, derived from legislation or through common law:
The use of insulting or threatening language in the magistrates' courts or against a magistrate is in breach of section 99 of the Magistrates Ordinance (Cap 227) which states the magistrate can 'summarily sentence the offender to a fine at level 3 and to imprisonment for 6 months.'
In addition, certain appeal boards are given the statutory authority for contempt by them (e.g., Residential Care Home, Hotel and Guesthouse Accommodation, Air Pollution Control, etc.). For contempt in front of these boards, the chairperson will certify the act of contempt to the Court of First Instance who will then proceed with a hearing and determine the punishment.
In England and Wales (a common law jurisdiction), the law on contempt is partly set out in case law (common law), and partly codified by the Contempt of Court Act 1981. Contempt may be classified as criminal or civil. The maximum penalty for criminal contempt under the 1981 Act is committal to prison for two years.
Disorderly, contemptuous or insolent behaviour toward the judge or magistrates while holding the court, tending to interrupt the due course of a trial or other judicial proceeding, may be prosecuted as "direct" contempt. The term "direct" means that the court itself cites the person in contempt by describing the behaviour observed on the record. Direct contempt is distinctly different from indirect contempt, wherein another individual may file papers alleging contempt against a person who has willfully violated a lawful court order.
There are limits to the powers of contempt created by rulings of European Court of Human Rights. Reporting on contempt of court, the Law Commission commented that "punishment of an advocate for what he or she says in court, whether a criticism of the judge or a prosecutor, amounts to an interference with his or her rights under article 10 of the ECHR" and that such limits must be "prescribed by law" and be "necessary in a democratic society", citing Nikula v Finland.
The Crown Court is a superior court according to the Senior Courts Act 1981, and Crown Courts have the power to punish contempt. The Divisional Court as part of the High Court has ruled that this power can apply in these three circumstances:
Where it is necessary to act quickly, a judge may act to impose committal (to prison) for contempt.
Where it is not necessary to be so urgent, or where indirect contempt has taken place the Attorney General can intervene and the Crown Prosecution Service will institute criminal proceedings on his behalf before a Divisional Court of the King's Bench Division of the High Court of Justice of England and Wales. In January 2012, for example, a juror who had researched information on the internet was jailed for contempt of court. Theodora Dallas, initially searching for the meaning of the term "grievous bodily harm", added search criteria which localised her search and brought to light another charge against the defendant. Because she then shared this information with the other jurors, the judge stated that she had compromised the defendant's right to a fair trial and the prosecution was abandoned.
Magistrates' courts also have powers under the 1981 Act to order the detention of any person who "insults the court" or otherwise disrupts its proceedings until the end of the sitting. Upon contempt being admitted or proved, the (invariably) District Judge (sitting as a magistrate) may order committal to prison for a maximum of one month, impose a fine of up to £2,500, or both.
It will be contempt to bring an audio recording device or picture-taking device of any sort into an English court without the consent of the court.
It will not be contempt according to section 10 of the Act for a journalist to refuse to disclose his sources, unless the court has considered the evidence available and determined that the information is "necessary in the interests of justice or national security or for the prevention of disorder or crime".
Under the Contempt of Court Act it is criminal contempt to publish anything which creates a real risk that the course of justice in proceedings may be seriously impaired. It only applies where proceedings are active, and the Attorney General has issued guidance as to when he believes this to be the case, and there is also statutory guidance. The clause prevents the newspapers and media from publishing material that is too extreme or sensationalist about a criminal case until the trial or linked trials are over and the juries have given their verdicts.
Section 2 of the Act defines and limits the previous common law definition of contempt (which was previously based upon a presumption that any conduct could be treated as contempt, regardless of intent) to only those instances where an intent to cause a substantial risk of serious prejudice to the administration of justice (e.g., the conduct of a trial) can be proved.
In civil proceedings there are two main ways in which contempt is committed:
In India, contempt of court is of two types:
In United States jurisprudence, acts of contempt are generally divided into direct or indirect, and civil or criminal. Direct contempt occurs in the presence of a judge; civil contempt is "coercive and remedial" as opposed to punitive. In the United States, relevant statutes include 18 U.S.C. §§ 401–403 and Federal Rule of Criminal Procedure 42.
Contempt of court in a civil suit is generally not considered to be a criminal offense, with the party benefiting from the order also holding responsibility for the enforcement of the order. However, some cases of civil contempt have been perceived as intending to harm the reputation of the plaintiff, or to a lesser degree, the judge or the court.
Sanctions for contempt may be criminal or civil. If a person is to be punished criminally, then the contempt must be proven beyond a reasonable doubt, but once the charge is proven, then punishment (such as a fine or, in more serious cases, imprisonment) is imposed unconditionally. The civil sanction for contempt (which is typically incarceration in the custody of the sheriff or similar court officer) is limited in its imposition for so long as the disobedience to the court's order continues: once the party complies with the court's order, the sanction is lifted. The imposed party is said to "hold the keys" to his or her own cell, thus conventional due process is not required. In federal and most state courts, the burden of proof for civil contempt is clear and convincing evidence, a lower standard than in criminal cases.
In civil contempt cases there is no principle of proportionality. In Chadwick v. Janecka (3d Cir. 2002), a U.S. court of appeals held that H. Beatty Chadwick could be held indefinitely for his failure to produce $2.5 million as a state court ordered in a civil trial. Chadwick had been imprisoned for nine years at that time and continued to be held in prison until 2009, when a state court set him free after 14 years, making his imprisonment the longest on a contempt charge to date.
Civil contempt is only appropriate when the imposed party has the power to comply with the underlying order. Controversial contempt rulings have periodically arisen from cases involving asset protection trusts, where the court has ordered a settlor of an asset protection trust to repatriate assets so that the assets may be made available to a creditor. A court cannot maintain an order of contempt where the imposed party does not have the ability to comply with the underlying order. This claim when made by the imposed party is known as the "impossibility defense".
Contempt of court is considered a prerogative of the court, and "the requirement of a jury does not apply to 'contempts committed in disobedience of any lawful writ, process, order, rule, decree, or command entered in any suit or action brought or prosecuted in the name of, or on behalf of, the United States.'" This stance is not universally shared by other areas of the legal world, and there have been many calls for contempt cases to be tried by jury rather than by judge, because of the potential conflict of interest arising from a judge both accusing and sentencing the defendant. At least one Supreme Court justice has called for jury trials to replace judge trials in contempt cases.
The United States Marshals Service is the agency component that first holds all federal prisoners. It uses the Prisoner Population Management System/Prisoner Tracking System. The only types of records that are disclosed as being in the system are those of "federal prisoners who are in custody pending criminal proceedings." The records of "alleged civil contemnors" are not listed in the Federal Register as being in the system, leading to a potential claim for damages under the Privacy Act, 5 U.S.C. § 552a(e)(4)(I).
In the United States, because of the broad protections granted by the First Amendment, a media outlet cannot, with extremely limited exceptions, be found in contempt of court for reporting about a case unless it is a party to the case: a court cannot order the media in general not to report on a case or forbid it from reporting facts discovered publicly. Newspapers cannot be closed because of their content.
There have been criticisms over the practice of trying contempt from the bench. In particular, Supreme Court Justice Hugo Black wrote in a dissent, "It is high time, in my judgment, to wipe out root and branch the judge-invented and judge-maintained notion that judges can try criminal contempt cases without a jury." | [
{
"paragraph_id": 0,
"text": "Contempt of court, often referred to simply as \"contempt\", is the crime of being disobedient to or disrespectful toward a court of law and its officers in the form of behavior that opposes or defies the authority, justice, and dignity of the court. A similar attitude toward a legislative body is termed contempt of Parliament or contempt of Congress. The verb for \"to commit contempt\" is contemn (as in \"to contemn a court order\") and a person guilty of this is a contemnor or contemner.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are broadly two categories of contempt: being disrespectful to legal authorities in the courtroom, or willfully failing to obey a court order. Contempt proceedings are especially used to enforce equitable remedies, such as injunctions. In some jurisdictions, the refusal to respond to subpoena, to testify, to fulfill the obligations of a juror, or to provide certain information can constitute contempt of the court.",
"title": ""
},
{
"paragraph_id": 2,
"text": "When a court decides that an action constitutes contempt of court, it can issue an order in the context of a court trial or hearing that declares a person or organization to have disobeyed or been disrespectful of the court's authority, called \"found\" or \"held\" in contempt. That is the judge's strongest power to impose sanctions for acts that disrupt the court's normal process.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A finding of being in contempt of court may result from a failure to obey a lawful order of a court, showing disrespect for the judge, disruption of the proceedings through poor behavior, or publication of material or non-disclosure of material, which in doing so is deemed likely to jeopardize a fair trial. A judge may impose sanctions such as a fine, jail or social service for someone found guilty of contempt of court, which makes contempt of court a process crime. Judges in common law systems usually have more extensive power to declare someone in contempt than judges in civil law systems.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Contempt of court is essentially seen as a form of disturbance that may impede the functioning of the court. The judge may impose fines and/or jail time upon any person committing contempt of court. The person is usually let out upon his or her agreement to fulfill the wishes of the court. Civil contempt can involve acts of omission. The judge will make use of warnings in most situations that may lead to a person being charged with contempt if the warnings are ignored. It is relatively rare that a person is charged for contempt without first receiving at least one warning from the judge. Constructive contempt, also called consequential contempt, is when a person fails to fulfill the will of the court as it applies to outside obligations of the person. In most cases, constructive contempt is considered to be in the realm of civil contempt due to its passive nature.",
"title": "In use today"
},
{
"paragraph_id": 5,
"text": "Indirect contempt is something that is associated with civil and constructive contempt and involves a failure to follow court orders. Criminal contempt includes anything that could be considered a disturbance, such as repeatedly talking out of turn, bringing forth previously banned evidence, or harassment of any other party in the courtroom, including committing an assault against the defendant in a criminal case. There have been instances during murder trials that grieving family members of murder victims have attacked the defendants in courtrooms in plain view of judges, bailiffs, and jurors, leading to said family members to be charged with contempt. Direct contempt is an unacceptable act in the presence of the judge (in facie curiae), and generally begins with a warning; it may be accompanied by the immediate imposition of a punishment.",
"title": "In use today"
},
{
"paragraph_id": 6,
"text": "In Australia, a judge may impose a fine or jail for contempt of court.",
"title": "In use today"
},
{
"paragraph_id": 7,
"text": "A Belgian correctional or civil judge may immediately try the person for insulting the court.",
"title": "In use today"
},
{
"paragraph_id": 8,
"text": "In Canada, contempt of court is an exception to the general principle that all criminal offences are set out in the federal Criminal Code. Contempt of court and contempt of Parliament are the only remaining common law offences in Canada.",
"title": "In use today"
},
{
"paragraph_id": 9,
"text": "Contempt of court includes the following behaviors:",
"title": "In use today"
},
{
"paragraph_id": 10,
"text": "This section applies only to the Federal Court of Appeal and Federal Court.",
"title": "In use today"
},
{
"paragraph_id": 11,
"text": "Under Federal Court Rules, Rules 466, and Rule 467 a person who is accused of Contempt needs to be first served with a contempt order and then appear in court to answer the charges. Convictions can only be made when proof beyond a reasonable doubt is achieved.",
"title": "In use today"
},
{
"paragraph_id": 12,
"text": "If it is a matter of urgency or the contempt was done in front of a judge, that person can be punished immediately. Punishment can range from the person being imprisoned for a period of less than five years or until the person complies with the order or fine.",
"title": "In use today"
},
{
"paragraph_id": 13,
"text": "Under Tax Court of Canada Rules of Tax Court of Canada Act, a person who is found to be in contempt may be imprisoned for a period of less than two years or fined. Similar procedures for serving an order first is also used at the Tax Court.",
"title": "In use today"
},
{
"paragraph_id": 14,
"text": "Different procedures exist for different provincial courts. For example, in British Columbia, a justice of the peace can only issue a summons to an offender for contempt, which will be dealt with by a judge, even if the offence was done in the face of the justice.",
"title": "In use today"
},
{
"paragraph_id": 15,
"text": "Judges from the Court of Final Appeal, High Court, District Court along with members from the various tribunals and Coroner's Court all have the power to impose immediate punishments for contempt in the face of the court, derived from legislation or through common law:",
"title": "In use today"
},
{
"paragraph_id": 16,
"text": "The use of insulting or threatening language in the magistrates' courts or against a magistrate is in breach of section 99 of the Magistrates Ordinance (Cap 227) which states the magistrate can 'summarily sentence the offender to a fine at level 3 and to imprisonment for 6 months.'",
"title": "In use today"
},
{
"paragraph_id": 17,
"text": "In addition, certain appeal boards are given the statutory authority for contempt by them (e.g., Residential Care Home, Hotel and Guesthouse Accommodation, Air Pollution Control, etc.). For contempt in front of these boards, the chairperson will certify the act of contempt to the Court of First Instance who will then proceed with a hearing and determine the punishment.",
"title": "In use today"
},
{
"paragraph_id": 18,
"text": "In England and Wales (a common law jurisdiction), the law on contempt is partly set out in case law (common law), and partly codified by the Contempt of Court Act 1981. Contempt may be classified as criminal or civil. The maximum penalty for criminal contempt under the 1981 Act is committal to prison for two years.",
"title": "In use today"
},
{
"paragraph_id": 19,
"text": "Disorderly, contemptuous or insolent behaviour toward the judge or magistrates while holding the court, tending to interrupt the due course of a trial or other judicial proceeding, may be prosecuted as \"direct\" contempt. The term \"direct\" means that the court itself cites the person in contempt by describing the behaviour observed on the record. Direct contempt is distinctly different from indirect contempt, wherein another individual may file papers alleging contempt against a person who has willfully violated a lawful court order.",
"title": "In use today"
},
{
"paragraph_id": 20,
"text": "There are limits to the powers of contempt created by rulings of European Court of Human Rights. Reporting on contempt of court, the Law Commission commented that \"punishment of an advocate for what he or she says in court, whether a criticism of the judge or a prosecutor, amounts to an interference with his or her rights under article 10 of the ECHR\" and that such limits must be \"prescribed by law\" and be \"necessary in a democratic society\", citing Nikula v Finland.",
"title": "In use today"
},
{
"paragraph_id": 21,
"text": "The Crown Court is a superior court according to the Senior Courts Act 1981, and Crown Courts have the power to punish contempt. The Divisional Court as part of the High Court has ruled that this power can apply in these three circumstances:",
"title": "In use today"
},
{
"paragraph_id": 22,
"text": "Where it is necessary to act quickly, a judge may act to impose committal (to prison) for contempt.",
"title": "In use today"
},
{
"paragraph_id": 23,
"text": "Where it is not necessary to be so urgent, or where indirect contempt has taken place the Attorney General can intervene and the Crown Prosecution Service will institute criminal proceedings on his behalf before a Divisional Court of the King's Bench Division of the High Court of Justice of England and Wales. In January 2012, for example, a juror who had researched information on the internet was jailed for contempt of court. Theodora Dallas, initially searching for the meaning of the term \"grievous bodily harm\", added search criteria which localised her search and brought to light another charge against the defendant. Because she then shared this information with the other jurors, the judge stated that she had compromised the defendant's right to a fair trial and the prosecution was abandoned.",
"title": "In use today"
},
{
"paragraph_id": 24,
"text": "Magistrates' courts also have powers under the 1981 Act to order to detain any person who \"insults the court\" or otherwise disrupts its proceedings until the end of the sitting. Upon contempt being admitted or proved the (invariably) District Judge (sitting as a magistrate) may order committal to prison for a maximum of one month, impose a fine of up to £2,500, or both.",
"title": "In use today"
},
{
"paragraph_id": 25,
"text": "It will be contempt to bring an audio recording device or picture-taking device of any sort into an English court without the consent of the court.",
"title": "In use today"
},
{
"paragraph_id": 26,
"text": "It will not be contempt according to section 10 of the Act for a journalist to refuse to disclose his sources, unless the court has considered the evidence available and determined that the information is \"necessary in the interests of justice or national security or for the prevention of disorder or crime\".",
"title": "In use today"
},
{
"paragraph_id": 27,
"text": "Under the Contempt of Court Act it is criminal contempt to publish anything which creates a real risk that the course of justice in proceedings may be seriously impaired. It only applies where proceedings are active, and the Attorney General has issued guidance as to when he believes this to be the case, and there is also statutory guidance. The clause prevents the newspapers and media from publishing material that is too extreme or sensationalist about a criminal case until the trial or linked trials are over and the juries have given their verdicts.",
"title": "In use today"
},
{
"paragraph_id": 28,
"text": "Section 2 of the Act defines and limits the previous common law definition of contempt (which was previously based upon a presumption that any conduct could be treated as contempt, regardless of intent), to only instances where there can be proved an intent to cause a substantial risk of serious prejudice to the administration of justice (i.e./e.g., the conduct of a trial).",
"title": "In use today"
},
{
"paragraph_id": 29,
"text": "In civil proceedings there are two main ways in which contempt is committed:",
"title": "In use today"
},
{
"paragraph_id": 30,
"text": "In India, contempt of court is of two types:",
"title": "In use today"
},
{
"paragraph_id": 31,
"text": "In United States jurisprudence, acts of contempt are generally divided into direct or indirect, and civil or criminal. Direct contempt occurs in the presence of a judge; civil contempt is \"coercive and remedial\" as opposed to punitive. In the United States, relevant statutes include 18 U.S.C. §§ 401–403 and Federal Rule of Criminal Procedure 42.",
"title": "In use today"
},
{
"paragraph_id": 32,
"text": "Contempt of court in a civil suit is generally not considered to be a criminal offense, with the party benefiting from the order also holding responsibility for the enforcement of the order. However, some cases of civil contempt have been perceived as intending to harm the reputation of the plaintiff, or to a lesser degree, the judge or the court.",
"title": "In use today"
},
{
"paragraph_id": 33,
"text": "Sanctions for contempt may be criminal or civil. If a person is to be punished criminally, then the contempt must be proven beyond a reasonable doubt, but once the charge is proven, then punishment (such as a fine or, in more serious cases, imprisonment) is imposed unconditionally. The civil sanction for contempt (which is typically incarceration in the custody of the sheriff or similar court officer) is limited in its imposition for so long as the disobedience to the court's order continues: once the party complies with the court's order, the sanction is lifted. The imposed party is said to \"hold the keys\" to his or her own cell, thus conventional due process is not required. In federal and most state courts, the burden of proof for civil contempt is clear and convincing evidence, a lower standard than in criminal cases.",
"title": "In use today"
},
{
"paragraph_id": 34,
"text": "In civil contempt cases there is no principle of proportionality. In Chadwick v. Janecka (3d Cir. 2002), a U.S. court of appeals held that H. Beatty Chadwick could be held indefinitely for his failure to produce $2.5 million as a state court ordered in a civil trial. Chadwick had been imprisoned for nine years at that time and continued to be held in prison until 2009, when a state court set him free after 14 years, making his imprisonment the longest on a contempt charge to date.",
"title": "In use today"
},
{
"paragraph_id": 35,
"text": "Civil contempt is only appropriate when the imposed party has the power to comply with the underlying order. Controversial contempt rulings have periodically arisen from cases involving asset protection trusts, where the court has ordered a settlor of an asset protection trust to repatriate assets so that the assets may be made available to a creditor. A court cannot maintain an order of contempt where the imposed party does not have the ability to comply with the underlying order. This claim when made by the imposed party is known as the \"impossibility defense\".",
"title": "In use today"
},
{
"paragraph_id": 36,
"text": "Contempt of court is considered a prerogative of the court, and \"the requirement of a jury does not apply to 'contempts committed in disobedience of any lawful writ, process, order, rule, decree, or command entered in any suit or action brought or prosecuted in the name of, or on behalf of, the United States.'\" This stance is not universally agreed with by other areas of the legal world, and there have been many calls to have contempt cases to be tried by jury, rather than by judge, as a potential conflict of interest rising from a judge both accusing and sentencing the defendant. At least one Supreme Court justice has made calls for jury trials to replace judge trials on contempt cases.",
"title": "In use today"
},
{
"paragraph_id": 37,
"text": "The United States Marshals Service is the agency component that first holds all federal prisoners. It uses the Prisoner Population Management System /Prisoner Tracking System. The only types of records that are disclosed as being in the system are those of \"federal prisoners who are in custody pending criminal proceedings.\" The records of \"alleged civil contempors\" are not listed in the Federal Register as being in the system leading to a potential claim for damages under The Privacy Act, 5 U.S.C. § 552a(e)(4)(I).",
"title": "In use today"
},
{
"paragraph_id": 38,
"text": "In the United States, because of the broad protections granted by the First Amendment, with extremely limited exceptions, unless the media outlet is a party to the case, a media outlet cannot be found in contempt of court for reporting about a case because a court cannot order the media in general not to report on a case or forbid it from reporting facts discovered publicly. Newspapers cannot be closed because of their content.",
"title": "In use today"
},
{
"paragraph_id": 39,
"text": "There have been criticisms over the practice of trying contempt from the bench. In particular, Supreme Court Justice Hugo Black wrote in a dissent, \"It is high time, in my judgment, to wipe out root and branch the judge-invented and judge-maintained notion that judges can try criminal contempt cases without a jury.\"",
"title": "In use today"
}
] | Contempt of court, often referred to simply as "contempt", is the crime of being disobedient to or disrespectful toward a court of law and its officers in the form of behavior that opposes or defies the authority, justice, and dignity of the court. A similar attitude toward a legislative body is termed contempt of Parliament or contempt of Congress. The verb for "to commit contempt" is contemn and a person guilty of this is a contemnor or contemner. There are broadly two categories of contempt: being disrespectful to legal authorities in the courtroom, or willfully failing to obey a court order. Contempt proceedings are especially used to enforce equitable remedies, such as injunctions. In some jurisdictions, the refusal to respond to subpoena, to testify, to fulfill the obligations of a juror, or to provide certain information can constitute contempt of the court. When a court decides that an action constitutes contempt of court, it can issue an order in the context of a court trial or hearing that declares a person or organization to have disobeyed or been disrespectful of the court's authority, called "found" or "held" in contempt. That is the judge's strongest power to impose sanctions for acts that disrupt the court's normal process. A finding of being in contempt of court may result from a failure to obey a lawful order of a court, showing disrespect for the judge, disruption of the proceedings through poor behavior, or publication of material or non-disclosure of material, which in doing so is deemed likely to jeopardize a fair trial. A judge may impose sanctions such as a fine, jail or social service for someone found guilty of contempt of court, which makes contempt of court a process crime. Judges in common law systems usually have more extensive power to declare someone in contempt than judges in civil law systems. | 2001-11-21T21:19:42Z | 2023-11-27T23:54:28Z | [
"Template:Cite journal",
"Template:Short description",
"Template:Main",
"Template:Usc",
"Template:Cite news",
"Template:Dead link",
"Template:Ussc",
"Template:ISBN",
"Template:Pp-move-indef",
"Template:Citation needed",
"Template:Reflist",
"Template:Citation",
"Template:Wikiquote",
"Template:Authority control",
"Template:More citations needed",
"Template:Ordered list",
"Template:Cite book",
"Template:Wiktionary",
"Template:Cite EB1911",
"Template:Portal",
"Template:Cite web",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Contempt_of_court |
7,202 | Corroborating evidence | Corroborating evidence, also referred to as corroboration, is a type of evidence in law.
Corroborating evidence tends to support a proposition that is already supported by some initial evidence, therefore confirming the proposition. For example, W, a witness, testifies that she saw X drive his automobile into a green car. Meanwhile, Y, another witness, testifies that when he examined X's car, later that day, he noticed green paint on its fender. There can also be corroborating evidence related to a certain source, such as what makes an author think a certain way due to the evidence that was supplied by witnesses or objects.
Another type of corroborating evidence comes from using the Baconian method, i.e., the method of agreement, method of difference, and method of concomitant variations.
These methods are followed in experimental design. They were codified by Francis Bacon and developed further by John Stuart Mill, and they consist of controlling several variables in turn to establish which variables are causally connected. These principles are widely used intuitively in various kinds of proofs, demonstrations, and investigations, in addition to being fundamental to experimental design.
In law, corroboration refers to the requirement in some jurisdictions, such as in Scots law, that any evidence adduced be backed up by at least one other source (see Corroboration in Scots law).
Defendant says, "It was like what he/she (a witness) said but...". This is Corroborative evidence from the defendant that the evidence the witness gave is true and correct.
Corroboration is not needed in certain instances. For example, there are certain statutory exceptions. In the Education (Scotland) Act, it is only necessary to produce a register as proof of lack of attendance. No further evidence is needed.
Perjury
See section 13 of the Perjury Act 1911.
Speeding offences
See section 89(2) of the Road Traffic Regulation Act 1984.
Sexual offences
See section 32 of the Criminal Justice and Public Order Act 1994.
Confessions by mentally handicapped persons
See section 77 of the Police and Criminal Evidence Act 1984.
Evidence of children
See section 34 of the Criminal Justice Act 1988.
Evidence of accomplices
See section 32 of the Criminal Justice and Public Order Act 1994. | [
{
"paragraph_id": 0,
"text": "Corroborating evidence, also referred to as corroboration, is a type of evidence in law.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Corroborating evidence tends to support a proposition that is already supported by some initial evidence, therefore confirming the proposition. For example, W, a witness, testifies that she saw X drive his automobile into a green car. Meanwhile, Y, another witness, testifies that when he examined X's car, later that day, he noticed green paint on its fender. There can also be corroborating evidence related to a certain source, such as what makes an author think a certain way due to the evidence that was supplied by witnesses or objects.",
"title": "Types and uses"
},
{
"paragraph_id": 2,
"text": "Another type of corroborating evidence comes from using the Baconian method, i.e., the method of agreement, method of difference, and method of concomitant variations.",
"title": "Types and uses"
},
{
"paragraph_id": 3,
"text": "These methods are followed in experimental design. They were codified by Francis Bacon, and developed further by John Stuart Mill and consist of controlling several variables, in turn, to establish which variables are causally connected. These principles are widely used intuitively in various kinds of proofs, demonstrations, and investigations, in addition to being fundamental to experimental design.",
"title": "Types and uses"
},
{
"paragraph_id": 4,
"text": "In law, corroboration refers to the requirement in some jurisdictions, such as in Scots law, that any evidence adduced be backed up by at least one other source (see Corroboration in Scots law).",
"title": "Types and uses"
},
{
"paragraph_id": 5,
"text": "Defendant says, \"It was like what he/she (a witness) said but...\". This is Corroborative evidence from the defendant that the evidence the witness gave is true and correct.",
"title": "An example of corroboration"
},
{
"paragraph_id": 6,
"text": "Corroboration is not needed in certain instances. For example, there are certain statutory exceptions. In the Education (Scotland) Act, it is only necessary to produce a register as proof of lack of attendance. No further evidence is needed.",
"title": "An example of corroboration"
},
{
"paragraph_id": 7,
"text": "Perjury",
"title": "England and Wales"
},
{
"paragraph_id": 8,
"text": "See section 13 of the Perjury Act 1911.",
"title": "England and Wales"
},
{
"paragraph_id": 9,
"text": "Speeding offences",
"title": "England and Wales"
},
{
"paragraph_id": 10,
"text": "See section 89(2) of the Road Traffic Regulation Act 1984.",
"title": "England and Wales"
},
{
"paragraph_id": 11,
"text": "Sexual offences",
"title": "England and Wales"
},
{
"paragraph_id": 12,
"text": "See section 32 of the Criminal Justice and Public Order Act 1994.",
"title": "England and Wales"
},
{
"paragraph_id": 13,
"text": "Confessions by mentally handicapped persons",
"title": "England and Wales"
},
{
"paragraph_id": 14,
"text": "See section 77 of the Police and Criminal Evidence Act 1984.",
"title": "England and Wales"
},
{
"paragraph_id": 15,
"text": "Evidence of children",
"title": "England and Wales"
},
{
"paragraph_id": 16,
"text": "See section 34 of the Criminal Justice Act 1988.",
"title": "England and Wales"
},
{
"paragraph_id": 17,
"text": "Evidence of accomplices",
"title": "England and Wales"
},
{
"paragraph_id": 18,
"text": "See section 32 of the Criminal Justice and Public Order Act 1994.",
"title": "England and Wales"
},
{
"paragraph_id": 19,
"text": "",
"title": "References"
}
] | Corroborating evidence, also referred to as corroboration, is a type of evidence in law. | 2001-11-21T21:21:35Z | 2023-10-04T23:03:37Z | [
"Template:Multiple issues",
"Template:Reflist",
"Template:Science-philo-stub",
"Template:Short description",
"Template:Redirect"
] | https://en.wikipedia.org/wiki/Corroborating_evidence |
7,203 | Cross-examination | In law, cross-examination is the interrogation of a witness by one's opponent. It is preceded by direct examination (known as examination-in-chief in Ireland, the United Kingdom, Australia, Canada, South Africa, India and Pakistan) and may be followed by a redirect (known as re-examination in the aforementioned countries). A redirect examination, performed by the attorney or pro se individual who performed the direct examination, clarifies the witness' testimony provided during cross-examination including any subject matter raised during cross-examination but not discussed during direct examination. Recross examination addresses the witness' testimony discussed in redirect by the opponent. Depending on the judge's discretion, opponents are allowed multiple opportunities to redirect and recross examine witnesses (this may vary by jurisdiction).
In the United States federal courts, a cross-examining attorney is generally limited by Rule 611 of the Federal Rules of Evidence to the "subject matter of the direct examination and matters affecting the witness's credibility". The rule also permits the trial court, in its discretion, to "allow inquiry into additional matters as if on direct examination". Many state courts do permit a lawyer to cross-examine a witness on matters not raised during direct examination, though California restricts cross-examination to "any matter within the scope of the direct examination". Similarly, courts in England, South Africa, Australia, and Canada allow a cross-examiner to exceed the scope of direct examination.
Since a witness called by the opposing party is presumed to be hostile, leading questions are allowed on cross-examination. A witness called by a direct examiner, on the other hand, may only be treated as hostile by that examiner after being permitted to do so by the judge, at the request of that examiner and as a result of the witness being openly antagonistic and/or prejudiced against the party that called them.
Cross-examination is a key component of a trial and the topic is given substantial attention during courses on trial advocacy. The opinions of a jury or judge are often changed if cross examination casts doubt on the witness. On the other hand, a credible witness may reinforce the substance of their original statements and enhance the judge's or jury's belief. Though the closing argument is often considered the deciding moment of a trial, effective cross-examination wins trials.
Attorneys anticipate hostile witnesses' responses during pretrial planning, and often attempt to shape the witnesses' perception of the questions to draw out information helpful to the attorney's case. Typically during an attorney's closing argument, they will repeat any admissions made by witnesses that favor their case. In the United States, cross-examination is seen as a core part of the entire adversarial system of justice, in that it "is the principal means by which the believability of a witness and the truth of his testimony are tested." Another key component affecting a trial outcome is jury selection, in which attorneys will attempt to include jurors from whom they feel they can get a favorable response or at the least an unbiased fair decision. So while there are many factors affecting the outcome of a trial, the cross-examination of a witness will often influence an open-minded unbiased jury searching for the certainty of facts upon which to base their decision. | [
{
"paragraph_id": 0,
"text": "In law, cross-examination is the interrogation of a witness by one's opponent. It is preceded by direct examination (known as examination-in-chief in Ireland, the United Kingdom, Australia, Canada, South Africa, India and Pakistan) and may be followed by a redirect (known as re-examination in the aforementioned countries). A redirect examination, performed by the attorney or pro se individual who performed the direct examination, clarifies the witness' testimony provided during cross-examination including any subject matter raised during cross-examination but not discussed during direct examination. Recross examination addresses the witness' testimony discussed in redirect by the opponent. Depending on the judge's discretion, opponents are allowed multiple opportunities to redirect and recross examine witnesses (this may vary by jurisdiction).",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the United States federal courts, a cross-examining attorney is generally limited by Rule 611 of the Federal Rules of Evidence to the \"subject matter of the direct examination and matters affecting the witness's credibility\". The rule also permits the trial court, in its discretion, to \"allow inquiry into additional matters as if on direct examination\". Many state courts do permit a lawyer to cross-examine a witness on matters not raised during direct examination, though California restricts cross-examination to \"any matter within the scope of the direct examination\". Similarly, courts in England, South Africa, Australia, and Canada allow a cross-examiner to exceed the scope of direct examination.",
"title": "Variations by jurisdiction"
},
{
"paragraph_id": 2,
"text": "Since a witness called by the opposing party is presumed to be hostile, leading questions are allowed on cross-examination. A witness called by a direct examiner, on the other hand, may only be treated as hostile by that examiner after being permitted to do so by the judge, at the request of that examiner and as a result of the witness being openly antagonistic and/or prejudiced against the party that called them.",
"title": "Variations by jurisdiction"
},
{
"paragraph_id": 3,
"text": "Cross-examination is a key component of a trial and the topic is given substantial attention during courses on trial advocacy. The opinions of a jury or judge are often changed if cross examination casts doubt on the witness. On the other hand, a credible witness may reinforce the substance of their original statements and enhance the judge's or jury's belief. Though the closing argument is often considered the deciding moment of a trial, effective cross-examination wins trials.",
"title": "Affecting the outcome of jury trials"
},
{
"paragraph_id": 4,
"text": "Attorneys anticipate hostile witnesses' responses during pretrial planning, and often attempt to shape the witnesses' perception of the questions to draw out information helpful to the attorney's case. Typically during an attorney's closing argument, they will repeat any admissions made by witnesses that favor their case. In the United States, cross-examination is seen as a core part of the entire adversarial system of justice, in that it \"is the principal means by which the believability of a witness and the truth of his testimony are tested.\" Another key component affecting a trial outcome is jury selection, in which attorneys will attempt to include jurors from whom they feel they can get a favorable response or at the least an unbiased fair decision. So while there are many factors affecting the outcome of a trial, the cross-examination of a witness will often influence an open-minded unbiased jury searching for the certainty of facts upon which to base their decision.",
"title": "Affecting the outcome of jury trials"
}
] | In law, cross-examination is the interrogation of a witness by one's opponent. It is preceded by direct examination and may be followed by a redirect. A redirect examination, performed by the attorney or pro se individual who performed the direct examination, clarifies the witness' testimony provided during cross-examination including any subject matter raised during cross-examination but not discussed during direct examination. Recross examination addresses the witness' testimony discussed in redirect by the opponent. Depending on the judge's discretion, opponents are allowed multiple opportunities to redirect and recross examine witnesses. | 2001-11-21T21:25:03Z | 2023-11-16T20:29:18Z | [
"Template:About",
"Template:Annotated link",
"Template:Reflist",
"Template:ISBN",
"Template:Ussc",
"Template:Wiktionary",
"Template:Short description",
"Template:Multiple issues",
"Template:Evidence law",
"Template:Webarchive",
"Template:Cite book",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cross-examination |
7,206 | Christiania | Christiania may refer to: | [
{
"paragraph_id": 0,
"text": "Christiania may refer to:",
"title": ""
}
] | Christiania may refer to: | 2001-11-21T23:41:15Z | 2023-12-29T02:00:28Z | [
"Template:Wiktionary",
"Template:Look from",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Christiania |
7,207 | Charles d'Abancourt | Charles Xavier Joseph de Franque Ville d'Abancourt (4 July 1758 – 9 September 1792) was a French statesman, minister to Louis XVI.
D'Abancourt was born in Douai, and was the nephew of Charles Alexandre de Calonne. He was Louis XVI's last minister of war (July 1792), and organised the defence of the Tuileries Palace during the 10 August attack. Commanded by the Legislative Assembly to send away the Swiss Guards, he refused, and was arrested for treason to the nation and sent to Orléans to be tried.
At the end of August the Assembly ordered Abancourt and the other prisoners at Orléans to be transferred to Paris with an escort commanded by Claude Fournier, nicknamed l'Americain. At Versailles they learned of the massacres at Paris. Abancourt and his fellow-prisoners were murdered in cold blood during the 9 September massacres (9 September 1792) at Versailles. Fournier was unjustly charged with complicity in the crime. | [
{
"paragraph_id": 0,
"text": "Charles Xavier Joseph de Franque Ville d'Abancourt (4 July 1758 – 9 September 1792) was a French statesman, minister to Louis XVI.",
"title": ""
},
{
"paragraph_id": 1,
"text": "D'Abancourt was born in Douai, and was the nephew of Charles Alexandre de Calonne. He was Louis XVI's last minister of war (July 1792), and organised the defence of the Tuileries Palace during the 10 August attack. Commanded by the Legislative Assembly to send away the Swiss Guards, he refused, and was arrested for treason to the nation and sent to Orléans to be tried.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "At the end of August the Assembly ordered Abancourt and the other prisoners at Orléans to be transferred to Paris with an escort commanded by Claude Fournier, nicknamed l'Americain. At Versailles they learned of the massacres at Paris. Abancourt and his fellow-prisoners were murdered in cold blood during the 9 September massacres (9 September 1792) at Versailles. Fournier was unjustly charged with complicity in the crime.",
"title": "Biography"
}
] | Charles Xavier Joseph de Franque Ville d'Abancourt was a French statesman, minister to Louis XVI. | 2001-11-22T04:04:48Z | 2023-09-18T01:06:04Z | [
"Template:S-end",
"Template:Authority control",
"Template:Reflist",
"Template:S-start",
"Template:S-aft",
"Template:Expand German",
"Template:Sfn",
"Template:S-bef",
"Template:S-ttl",
"Template:Use dmy dates",
"Template:Cite book",
"Template:ISBN",
"Template:EB1911",
"Template:S-off",
"Template:Snd",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Charles_d%27Abancourt |
7,211 | Curtiss P-40 Warhawk | The Curtiss P-40 Warhawk is an American single-engined, single-seat, all-metal fighter-bomber that first flew in 1938. The P-40 design was a modification of the previous Curtiss P-36 Hawk which reduced development time and enabled a rapid entry into production and operational service. The Warhawk was used by most Allied powers during World War II, and remained in frontline service until the end of the war. It was the third most-produced American fighter of World War II, after the P-51 and P-47; by November 1944, when production of the P-40 ceased, 13,738 had been built, all at Curtiss-Wright Corporation's main production facilities in Buffalo, New York.
P-40 Warhawk was the name the United States Army Air Corps gave the plane, and after June 1941, the USAAF adopted the name for all models, making it the official name in the U.S. for all P-40s. The British Commonwealth and Soviet air forces used the name Tomahawk for models equivalent to the original P-40, P-40B, and P-40C, and the name Kittyhawk for models equivalent to the P-40D and all later variants. P-40s first saw combat with the British Commonwealth squadrons of the Desert Air Force in the Middle East and North African campaigns, during June 1941. No. 112 Squadron Royal Air Force, was among the first to operate Tomahawks in North Africa and the unit was the first Allied military aviation unit to feature the "shark mouth" logo, copying similar markings on some Luftwaffe Messerschmitt Bf 110 twin-engine fighters.
The P-40's liquid-cooled, supercharged Allison V-1710 V-12 engine's lack of a two-speed supercharger made it inferior to Luftwaffe fighters such as the Messerschmitt Bf 109 or the Focke-Wulf Fw 190 in high-altitude combat and it was rarely used in operations in Northwest Europe. However, between 1941 and 1944, the P-40 played a critical role with Allied air forces in three major theaters: North Africa, the Southwest Pacific, and China. It also had a significant role in the Middle East, Southeast Asia, Eastern Europe, Alaska and Italy. The P-40's performance at high altitudes was not as important in those theaters, where it served as an air superiority fighter, bomber escort and fighter-bomber.
Although it gained a postwar reputation as a mediocre design, suitable only for close air support, more recent research including scrutiny of the records of Allied squadrons indicates that this was not the case; the P-40 performed surprisingly well as an air superiority fighter, at times suffering severe losses, but also inflicting a very heavy toll on enemy aircraft. Based on war-time victory claims, over 200 Allied fighter pilots – from the UK, Australia, New Zealand, Canada, South Africa, the US and the Soviet Union – became aces flying the P-40. These included at least 20 double aces, mostly over North Africa, China, Burma and India, the South West Pacific and Eastern Europe. The P-40 offered the additional advantages of low cost and durability, which kept it in production as a ground-attack aircraft long after it was obsolescent as a fighter.
On 14 October 1938, Curtiss test pilot Edward Elliott flew the prototype XP-40 on its first flight in Buffalo. The XP-40 was the 10th production Curtiss P-36 Hawk, with its Pratt & Whitney R-1830 Twin Wasp 14-cylinder air-cooled radial engine replaced at the direction of Chief Engineer Don R. Berlin by a liquid-cooled, supercharged Allison V-1710 V-12 engine. The first prototype placed the glycol coolant radiator in an underbelly position on the fighter, just aft of the wing's trailing edge. USAAC Fighter Projects Officer Lieutenant Benjamin S. Kelsey flew this prototype some 300 miles in 57 minutes, approximately 315 miles per hour (507 km/h). Hiding his disappointment, he told reporters that future versions would likely go 100 miles per hour (160 km/h) faster. Kelsey was interested in the Allison engine because it was sturdy and dependable, and it had a smooth, predictable power curve. The V-12 engine offered as much power as a radial engine but had a smaller frontal area and allowed a more streamlined cowl than an aircraft with a radial engine, promising a theoretical 5% increase in top speed.
Curtiss engineers worked to improve the XP-40's speed by moving the radiator forward in steps. Seeing little gain, Kelsey ordered the aircraft to be evaluated in a NACA wind tunnel to identify solutions for better aerodynamic qualities. From 28 March to 11 April 1939, the prototype was studied by NACA. Based on the data obtained, Curtiss moved the glycol coolant radiator forward to the chin; its new air scoop also accommodated the oil cooler air intake. Other improvements to the landing gear doors and the exhaust manifold combined to give performance that was satisfactory to the USAAC. Without beneficial tail winds, Kelsey flew the XP-40 from Wright Field back to Curtiss's plant in Buffalo at an average speed of 354 mph (570 km/h). Further tests in December 1939 proved the fighter could reach 366 mph (589 km/h).
An unusual production feature was a special truck rig to speed delivery at the main Curtiss plant in Buffalo, New York. The rig moved the newly built P-40s in two main components, the main wing and the fuselage, the eight miles from the plant to the airport where the two units were mated for flight and delivery.
The P-40 was conceived as a pursuit aircraft and was agile at low and medium altitudes but suffered from a lack of power at higher altitudes. At medium and high speeds it was one of the tightest-turning early monoplane designs of the war, and it could out turn most opponents it faced in North Africa and the Russian Front. In the Pacific Theater it was out-turned at lower speeds by the lightweight fighters A6M Zero and Nakajima Ki-43 "Oscar". The American Volunteer Group Commander Claire Chennault advised against prolonged dog-fighting with the Japanese fighters due to speed reduction favoring the Japanese.
Allison's V-1710 engines produced 1,040 hp (780 kW) at sea level and 14,000 ft (4,300 m). This was not powerful compared with contemporary fighters, and the early P-40 variants' top speeds were only average. The single-stage, single-speed supercharger meant that the P-40 was a poor high-altitude fighter. Later versions, with 1,200 hp (890 kW) Allisons or more powerful 1,400 hp Packard Merlin engines were more capable. Climb performance was fair to poor, depending on the subtype. Dive acceleration was good and dive speed was excellent. The highest-scoring P-40 ace, Clive Caldwell (RAAF), who claimed 22 of his 28½ kills in the type, said that the P-40 had "almost no vices", although "it was a little difficult to control in terminal velocity". The P-40 had one of the fastest maximum dive speeds of any fighter of the early war period, and good high-speed handling.
The P-40 tolerated harsh conditions and a variety of climates. Its semi-modular design was easy to maintain in the field. It lacked innovations such as boosted ailerons or automatic leading edge slats, but its strong structure included a five-spar wing, which enabled P-40s to pull high-G turns and survive some midair collisions. Intentional ramming attacks against enemy aircraft were occasionally recorded as victories by the Desert Air Force and Soviet Air Forces. Caldwell said P-40s "would take a tremendous amount of punishment, violent aerobatics as well as enemy action". Operational range was good by early war standards and was almost double that of the Supermarine Spitfire or Messerschmitt Bf 109, although inferior to the Mitsubishi A6M Zero, Nakajima Ki-43 and Lockheed P-38 Lightning.
Caldwell found the P-40C Tomahawk's armament of two .50-inch (13 mm) Browning AN/M2 "light-barrel" dorsal nose-mount synchronized machine guns and two .303-inch (7.7 mm) Browning machine guns in each wing to be inadequate. This was improved with the P-40D (Kittyhawk I) which abandoned the synchronized gun mounts and instead had two .50-inch (13 mm) guns in each wing, although Caldwell still preferred the earlier Tomahawk in other respects. The D had armor around the engine and the cockpit, which enabled it to withstand considerable damage. This allowed Allied pilots in Asia and the Pacific to attack Japanese fighters head on, rather than try to out-turn and out-climb their opponents. Late-model P-40s were well armored. Visibility was adequate, although hampered by a complex windscreen frame, and completely blocked to the rear in early models by a raised turtledeck. Poor ground visibility and relatively narrow landing gear track caused many losses on the ground.
Curtiss tested a follow-on design, the Curtiss XP-46, but it offered little improvement over newer P-40 models and was cancelled.
In April 1939, the U.S. Army Air Corps, having witnessed the new, sleek, high-speed, in-line-engined fighters of the European air forces, placed the largest fighter order it had ever made for 524 P-40s.
An early order came from the French Armée de l'Air, which was already operating P-36s. The Armée de l'Air ordered 100 (later the order was increased to 230) as the Hawk 81A-1 but the French were defeated before the aircraft had left the factory and the aircraft were diverted to British and Commonwealth service (as the Tomahawk I), in some cases complete with metric flight instruments.
In late 1942, as French forces in North Africa split from the Vichy government to side with the Allies, U.S. forces transferred P-40Fs from 33rd FG to GC II/5, a squadron that was historically associated with the Lafayette Escadrille. GC II/5 used its P-40Fs and Ls in combat in Tunisia and later for patrol duty off the Mediterranean coast until mid-1944, when they were replaced by Republic P-47D Thunderbolts.
In all, 18 Royal Air Force (RAF) squadrons, four Royal Canadian Air Force (RCAF), three South African Air Force (SAAF) and two Royal Australian Air Force (RAAF) squadrons serving with RAF formations, used P-40s. The first units to convert were Hawker Hurricane squadrons of the Desert Air Force (DAF), in early 1941. The first Tomahawks delivered came without armor, bulletproof windscreens or self-sealing fuel tanks, which were installed in subsequent shipments. Pilots used to British fighters sometimes found it difficult to adapt to the P-40's rear-folding landing gear, which was more prone to collapse than the lateral-folding landing gear of the Hurricane or Supermarine Spitfire. In contrast to the "three-point landing" commonly employed with British types, P-40 pilots were obliged to use a "wheels landing": a longer, low angle approach that touched down on the main wheels first.
Testing showed the aircraft did not have the performance needed for use in Northwest Europe at high altitude, due to the service ceiling limitation. Spitfires used in the theater operated at heights around 30,000 ft (9,100 m), while the P-40's Allison engine, with its single-stage, low-altitude-rated supercharger, worked best at 15,000 ft (4,600 m) or lower. When the Tomahawk was used by Allied units based in the UK from February 1941, this limitation relegated it to low-level reconnaissance with RAF Army Cooperation Command, and only No. 403 Squadron RCAF used it in the fighter role, for a mere 29 sorties, before being replaced by Spitfires. The Air Ministry deemed the P-40 unsuitable for the theater, and from mid-1942 UK P-40 squadrons re-equipped with aircraft such as the Mustang.
The Tomahawk was superseded in North Africa by the more powerful Kittyhawk ("D"-mark onwards) types from early 1942, though some Tomahawks remained in service until 1943. Kittyhawks included many improvements and were the DAF's air superiority fighter for the critical first few months of 1942, until "tropicalised" Spitfires were available. DAF units received nearly 330 Packard V-1650 Merlin-powered P-40Fs, called Kittyhawk IIs, most of which went to the USAAF and the majority of the 700 "lightweight" L models, also powered by the Packard Merlin, in which the armament was reduced to four .50 in (12.7 mm) Brownings (Kittyhawk IIA). The DAF also received some 21 of the later P-40K and the majority of the 600 P-40Ms built; these were known as Kittyhawk IIIs. The "lightweight" P-40Ns (Kittyhawk IV) arrived from early 1943 and were used mostly as fighter-bombers. From July 1942 until mid-1943, elements of the U.S. 57th Fighter Group (57th FG) were attached to DAF P-40 units. The British government also donated 23 P-40s to the Soviet Union.
Tomahawks and Kittyhawks bore the brunt of Luftwaffe and Regia Aeronautica fighter attacks during the North African campaign. The P-40s were considered superior to the Hurricane, which they replaced as the primary fighter of the Desert Air Force.
I would evade being shot at accurately by pulling so much g-force...that you could feel the blood leaving the head and coming down over your eyes... And you would fly like that for as long as you could, knowing that if anyone was trying to get on your tail they were going through the same bleary vision that you had and you might get away... I had deliberately decided that any deficiency the Kittyhawk had was offset by aggression. And I'd done a little bit of boxing – I beat much better opponents simply by going for [them]. And I decided to use that in the air. And it paid off.
The P-40 initially proved quite effective against Axis aircraft and contributed to a slight shift of advantage in the Allies' favor. The gradual replacement of Hurricanes by the Tomahawks and Kittyhawks led to the Luftwaffe accelerating retirement of the Bf 109E and introducing the newer Bf 109F; these were to be flown by the veteran pilots of elite Luftwaffe units, such as Jagdgeschwader 27 (JG27), in North Africa. The P-40 was generally considered roughly equal or slightly superior to the Bf 109 at low altitude but inferior at high altitude, particularly against the Bf 109F. Most air combat in North Africa took place well below 16,000 ft (4,900 m), negating much of the Bf 109's superiority. The P-40 usually had an advantage over the Bf 109 in turning, dive speed and structural strength, was roughly equal in firepower but was slightly inferior in speed and outclassed in rate of climb and operational ceiling.
The P-40 was generally superior to early Italian fighter types, such as the Fiat G.50 Freccia and the Macchi C.200. Its performance against the Macchi C.202 Folgore elicited varying opinions. Some observers consider the Macchi C.202 superior. Caldwell, who scored victories against them in his P-40, felt that the Folgore was superior to the P-40 and the Bf 109 except that its armament of only two or four machine guns was inadequate. Other observers considered the two equally matched or favored the Folgore in aerobatic performance, such as turning radius. The aviation historian Walter J. Boyne wrote that over Africa, the P-40 and the Folgore were "equivalent". Against its lack of high-altitude performance, the P-40 was considered to be a stable gun platform and its rugged construction meant that it was able to operate from rough front line airstrips with a good rate of serviceability.
The earliest victory claims by P-40 pilots include Vichy French aircraft, during the 1941 Syria-Lebanon campaign, against Dewoitine D.520s, a type often considered to be the best French fighter of the war. The P-40 was deadly against Axis bombers in the theater, as well as against the Bf 110 twin-engine fighter. In June 1941, Caldwell, of 250 Squadron in Egypt, flying as Flying Officer (F/O) Jack Hamlyn's wingman, recorded in his log book that he was involved in the first air combat victory for the P-40. This was a CANT Z.1007 bomber on 6 June. The claim was not officially recognized, as the crash of the CANT was not witnessed. The first official victory occurred on 8 June, when Hamlyn and Flight Sergeant (Flt Sgt) Tom Paxton destroyed a CANT Z.1007 from 211 Squadriglia of the Regia Aeronautica, over Alexandria. Several days later, the Tomahawk was in action over Syria with No. 3 Squadron RAAF, which claimed 19 aerial victories over Vichy French aircraft during June and July 1941, for the loss of one P-40 (and one lost to ground fire).
Some DAF units initially failed to use the P-40's strengths or used outdated defensive tactics such as the Lufbery circle. The superior climb rate of the Bf 109 enabled fast, swooping attacks, neutralizing the advantages offered by conventional defensive tactics. Various new formations were tried by Tomahawk units from 1941 to 1942, including "fluid pairs" (similar to the German rotte); the Thach Weave (one or two "weavers") at the back of a squadron in formation and whole squadrons bobbing and weaving in loose formations. Werner Schröer, who was credited with destroying 114 Allied aircraft in only 197 combat missions, referred to the latter formation as "bunches of grapes", because he found them so easy to pick off. The leading German expert in North Africa, Hans-Joachim Marseille, claimed as many as 101 P-40s during his career.
From 26 May 1942, Kittyhawk units operated primarily as fighter-bomber units, giving rise to the nickname "Kittybomber". As a result of this change in role and because DAF P-40 squadrons were frequently used in bomber escort and close air support missions, they suffered relatively high losses; many Desert Air Force P-40 pilots were caught flying low and slow by marauding Bf 109s.
Caldwell believed that Operational Training Units did not properly prepare pilots for air combat in the P-40 and as a commander, stressed the importance of training novice pilots properly.
Competent pilots who took advantage of the P-40's strengths were effective against the best of the Luftwaffe and Regia Aeronautica. In August 1941, Caldwell was attacked by two Bf 109s, one of them piloted by German ace Werner Schröer. Although Caldwell was wounded three times and his Tomahawk was hit by more than 100 7.92 mm (0.312 in) bullets and five 20 mm cannon shells, Caldwell shot down Schröer's wingman and returned to base. Some sources also claim that in December 1941, Caldwell killed a prominent German Experte, Erbo von Kageneck (69 kills), while flying a P-40. Caldwell's victories in North Africa included 10 Bf 109s and two Macchi C.202s. Billy Drake of 112 Squadron was the leading British P-40 ace with 13 victories. James "Stocky" Edwards (RCAF), who achieved 12 kills in the P-40 in North Africa, shot down German ace Otto Schulz (51 kills) while flying a Kittyhawk with No. 260 Squadron RAF. Caldwell, Drake, Edwards and Nicky Barr were among at least a dozen pilots who achieved ace status twice over while flying the P-40. A total of 46 British Commonwealth pilots became aces in P-40s, including seven double aces.
The Flying Tigers, known officially as the 1st American Volunteer Group (AVG), were a unit of the Chinese Air Force, recruited from amongst U.S. Navy, Marine Corps and Army aviators and ground crew.
Chennault received crated Model Bs which his airmen assembled in Burma at the end of 1941, adding self-sealing fuel tanks and a second pair of wing guns, such that the aircraft became a hybrid of B and C models. These were not well-liked by their pilots: they lacked drop tanks for extra range, and there were no bomb racks on the wings. Chennault considered the liquid-cooled engine vulnerable in combat because a single bullet through the coolant system would cause the engine to overheat in minutes. The Tomahawks also had no radios, so the AVG improvised by installing a fragile radio transceiver, the RCA-7-H, which had been built for a Piper Cub. Because the plane had a single-stage low-altitude supercharger, its effective ceiling was about 25,000 feet (7,600 m). The most critical problem was the lack of spare parts; the only source was from damaged aircraft. The planes were viewed as cast-offs that no one else wanted, dangerous and difficult to fly. But the pilots did appreciate some of the planes' features. There were two heavy sheets of steel behind the pilot's head and back that offered solid protection, and overall the planes were ruggedly constructed.
Compared to opposing Japanese fighters, the P-40B's strengths were that it was sturdy, well armed, faster in a dive and possessed an excellent rate of roll. While the P-40s could not match the maneuverability of the Japanese Army air arm's Nakajima Ki-27s and Ki-43s, nor the much more famous Zero naval fighter in slow, turning dogfights, at higher speeds the P-40s were more than a match. AVG leader Claire Chennault trained his pilots to use the P-40's particular performance advantages. The P-40 had a higher dive speed than any Japanese fighter aircraft of the early war years, for example, and could exploit so-called "boom-and-zoom" tactics. The AVG was highly successful, and its feats were widely publicized by an active cadre of international journalists to boost sagging public morale at home. According to its official records, in just 6½ months, the Flying Tigers destroyed 297 enemy aircraft for the loss of just four of its own in air-to-air combat.
In the spring of 1942, the AVG received a small number of Model E's. Each came equipped with a radio, six .50-caliber machine guns, and auxiliary bomb racks that could hold 35-lb fragmentation bombs. Chennault's armorer added bomb racks for 570-lb Russian bombs, which the Chinese had in abundance. These planes were used in the battle of the Salween River Gorge in late May 1942, which kept the Japanese from entering China from Burma and threatening Kunming. Spare parts, however, remained in short supply. "Scores of new planes...were now in India, and there they stayed—in case the Japanese decided to invade... the AVG was lucky to get a few tires and spark plugs with which to carry on its daily war."
China received 27 P-40E models in early 1943. These were assigned to squadrons of the 4th Air Group.
A total of 15 USAAF pursuit/fighter groups (FG), along with other pursuit/fighter squadrons and a few tactical reconnaissance (TR) units, operated the P-40 during 1941–45. As was also the case with the Bell P-39 Airacobra, many USAAF officers considered the P-40 exceptional but it was gradually replaced by the Lockheed P-38 Lightning, the Republic P-47 Thunderbolt and the North American P-51 Mustang. The bulk of the fighter operations by the USAAF in 1942–43 were borne by the P-40 and the P-39. In the Pacific, these two fighters, along with the U.S. Navy Grumman F4F Wildcat, contributed more than any other U.S. types to breaking Japanese air power during this critical period.
The P-40 was the main USAAF fighter aircraft in the South West Pacific and Pacific Ocean theaters during 1941–42. At Pearl Harbor and in the Philippines, USAAF P-40 squadrons suffered crippling losses on the ground and in the air to Japanese fighters such as the A6M Zero and Ki-43 Hayabusa respectively. During the attack on Pearl Harbor, most of the USAAF fighters were P-40Bs, the majority of which were destroyed. However, a few P-40s managed to get in the air and shoot down several Japanese aircraft, most notably by George Welch and Kenneth Taylor.
In the Dutch East Indies campaign, the 17th Pursuit Squadron (Provisional), formed from USAAF pilots evacuated from the Philippines, claimed 49 Japanese aircraft destroyed, for the loss of 17 P-40s. The seaplane tender USS Langley was sunk by Japanese airplanes while delivering P-40s to Tjilatjap, Java. In the Solomon Islands and New Guinea Campaigns and the air defence of Australia, improved tactics and training allowed the USAAF to better use the strengths of the P-40. Due to aircraft fatigue, scarcity of spare parts and replacement problems, the US Fifth Air Force and Royal Australian Air Force created a joint P-40 management and replacement pool on 30 July 1942, and many P-40s went back and forth between the air forces.
The 49th Fighter Group was in action in the Pacific from the beginning of the war. Robert M. DeHaven scored 10 kills (of 14 overall) in the P-40 with the 49th FG. He compared the P-40 favorably with the P-38:
The 8th, 15th, 18th, 24th, 49th, 343rd and 347th PGs/FGs, flew P-40s in the Pacific theaters between 1941 and 1945, with most units converting to P-38s from 1943 to 1944. In 1945, the 71st Reconnaissance Group employed them as armed forward air controllers during ground operations in the Philippines, until it received delivery of P-51s. They claimed 655 aerial victories.
Contrary to conventional wisdom, with sufficient altitude, the P-40 could turn with the A6M and other Japanese fighters, using a combination of a nose-down vertical turn with a bank turn, a technique known as a low yo-yo. Robert DeHaven describes how this tactic was used in the 49th Fighter Group:
USAAF and Chinese P-40 pilots performed well in this theater against many Japanese types such as the Ki-43, Nakajima Ki-44 "Tojo" and the Zero. The P-40 remained in use in the China Burma India Theater (CBI) until 1944 and was reportedly preferred over the P-51 Mustang by some US pilots flying in China. The American Volunteer Group (Flying Tigers) was integrated into the USAAF as the 23rd Fighter Group in June 1942. The unit continued to fly newer model P-40s until 1944, achieving a high kill-to-loss ratio. In the Battle of the Salween River Gorge of May 1942 the AVG used the P-40E model equipped with wing racks that could carry six 35-pound fragmentation bombs and Chennault's armorer developed belly racks to carry Russian 570-pound bombs, which the Chinese had in large quantity.
Units arriving in the CBI after the AVG in the 10th and 14th Air Forces continued to perform well with the P-40, claiming 973 kills in the theater, or 64.8 percent of all enemy aircraft shot down. Aviation historian Carl Molesworth stated that "...the P-40 simply dominated the skies over Burma and China. They were able to establish air superiority over free China, northern Burma and the Assam valley of India in 1942, and they never relinquished it." The 3rd, 5th, 51st and 80th FGs, along with the 10th TRS, operated the P-40 in the CBI. CBI P-40 pilots used the aircraft very effectively as a fighter-bomber. The 80th Fighter Group in particular used its so-called B-40 (P-40s carrying 1,000-pound high-explosive bombs) to destroy bridges and kill bridge repair crews, sometimes demolishing their target with one bomb. At least 40 U.S. pilots reached ace status while flying the P-40 in the CBI.
On 14 August 1942, the first confirmed victory by a USAAF unit over a German aircraft in World War II was achieved by a P-40C pilot. 2nd Lt Joseph D. Shaffer, of the 33rd Fighter Squadron, intercepted a Focke-Wulf Fw 200C-3 maritime patrol aircraft that overflew his base at Reykjavík, Iceland. Shaffer damaged the Fw 200, which was finished off by a P-38F. Warhawks were used extensively in the Mediterranean and Middle East theatre of World War II by USAAF units, including the 33rd, 57th, 58th, 79th, 324th and 325th Fighter Groups. While the P-40 suffered heavy losses in the MTO, many USAAF P-40 units achieved high kill-to-loss ratios against Axis aircraft; the 324th FG scored better than a 2:1 ratio in the MTO. In all, 23 U.S. pilots became aces in the MTO on the P-40, most of them during the first half of 1943.
P-40 pilots from the 57th FG were the first USAAF fliers to see action in the MTO, while attached to Desert Air Force Kittyhawk squadrons, from July 1942. The 57th was also the main unit involved in the "Palm Sunday Massacre", on 18 April 1943. Decoded Ultra signals revealed a plan for a large formation of Junkers Ju 52 transports to cross the Mediterranean, escorted by German and Italian fighters. Between 1630 and 1830 hours, all wings of the group were engaged in an intensive effort against the enemy air transports. Of the four Kittyhawk wings, three had left the patrol area before a convoy of 100+ enemy transports was sighted by the 57th FG, which tallied 74 aircraft destroyed. The group was last in the area, and intercepted the Ju 52s escorted by large numbers of Bf 109s, Bf 110s and Macchi C.202s. The group claimed 58 Ju 52s, 14 Bf 109s and two Bf 110s destroyed, with several probables and damaged. Between 20 and 40 of the Axis aircraft landed on the beaches around Cap Bon to avoid being shot down; six Allied fighters were lost, five of them P-40s.
On 22 April, in Operation Flax, a similar force of P-40s attacked a formation of 14 Messerschmitt Me 323 Gigant ("Giant") six-engine transports, covered by seven Bf 109s from II./JG 27. All the transports were shot down, for a loss of three P-40s. The 57th FG was equipped with the Curtiss fighter until early 1944, during which time they were credited with at least 140 air-to-air kills. On 23 February 1943, during Operation Torch, the pilots of the 58th FG flew 75 P-40Ls off the aircraft carrier USS Ranger to the newly captured Vichy French airfield, Cazas, near Casablanca, in French Morocco. The aircraft supplied the 33rd FG and the pilots were reassigned.
The 325th FG (known as the "Checkertail Clan") flew P-40s in the MTO and was credited with at least 133 air-to-air kills from April–October 1943, of which 95 were Bf 109s and 26 were Macchi C.202s, for the loss of 17 P-40s in combat. The 325th FG historian Carol Cathcart wrote:
on 30 July, 20 P-40s of the 317th [Fighter Squadron] ... took off on a fighter sweep ... over Sardinia. As they turned to fly south over the west part of the island, they were attacked near Sassari... The attacking force consisted of 25 to 30 Bf 109s and Macchi C.202s... In the brief, intense battle that occurred ... [the 317th claimed] 21 enemy aircraft.
Cathcart wrote that Lt. Robert Sederberg assisted a comrade being attacked by five Bf 109s, destroyed at least one German aircraft, and may have shot down as many as five. Sederberg was shot down and became a prisoner of war.
A famous African-American unit, the 99th FS, better known as the "Tuskegee Airmen" or "Redtails", flew P-40s in stateside training and for their initial eight months in the MTO. On 9 June 1943, they became the first African-American fighter pilots to engage enemy aircraft, over Pantelleria, Italy. A single Focke-Wulf Fw 190 was reported damaged by Lieutenant Willie Ashley Jr. On 2 July the squadron claimed its first verified kill; a Fw 190 destroyed by Captain Charles Hall. The 99th continued to score with P-40s until February 1944, when they were assigned P-39s and P-51 Mustangs.
The much-lightened P-40L was most heavily used in the MTO, primarily by U.S. pilots. Many US pilots stripped down their P-40s even further to improve performance, often removing two or more of the wing guns from the P-40F/L.
The Kittyhawk was the main fighter used by the RAAF in World War II, in greater numbers than the Spitfire. Two RAAF squadrons serving with the Desert Air Force, No. 3 and No. 450 Squadrons, were the first Australian units to be assigned P-40s. Other RAAF pilots served with RAF or SAAF P-40 squadrons in the theater.
Many RAAF pilots achieved high scores in the P-40. At least five reached "double ace" status: Clive Caldwell, Nicky Barr, John Waddy, Bob Whittle (11 kills each) and Bobby Gibbes (10 kills) in the Middle East, North African and/or New Guinea campaigns. In all, 18 RAAF pilots became aces while flying P-40s.
Nicky Barr, like many Australian pilots, considered the P-40 a reliable mount: "The Kittyhawk became, to me, a friend. It was quite capable of getting you out of trouble more often than not. It was a real warhorse."
At the same time as the heaviest fighting in North Africa, the Pacific War was in its early stages, and RAAF units in Australia were completely lacking in suitable fighter aircraft. Spitfire production was being absorbed by the war in Europe; P-38s were trialled but difficult to obtain; Mustangs had not yet reached squadrons anywhere, and Australia's tiny and inexperienced aircraft industry was geared towards larger aircraft. USAAF P-40s and their pilots, originally intended for the U.S. Far East Air Force in the Philippines but diverted to Australia as a result of Japanese naval activity, were the first suitable fighter aircraft to arrive in substantial numbers. By mid-1942, the RAAF was able to obtain some USAAF replacement shipments.
RAAF Kittyhawks played a crucial role in the South West Pacific theater. They fought on the front line as fighters during the critical early years of the Pacific War, and the durability and bomb-carrying ability (1,000 lb/454 kg) of the P-40 also made it ideal for the ground attack role. During the defence of Port Moresby, No. 75 Squadron RAAF destroyed or damaged some 33 Japanese aircraft of various types, with another 30 probables. General Henry H. Arnold said of No. 75 Squadron: "Victory in the entire air war against Japan can be traced back to the actions which took place from that dusty strip at Port Moresby in early 1942." Nos. 75 and 76 Squadrons also played a critical role during the Battle of Milne Bay, fending off Japanese aircraft and providing effective close air support for the Australian infantry, which negated the initial Japanese advantage in light tanks and sea power. The Kittyhawks fired "nearly 200,000 rounds of half-inch ammunition" during the course of the battle.
The RAAF units that most used Kittyhawks in the South West Pacific were 75, 76, 77, 78, 80, 82, 84 and 86 Squadrons. These squadrons saw action mostly in the New Guinea and Borneo campaigns.
Late in 1945, RAAF fighter squadrons in the South West Pacific began converting to P-51Ds; however, Kittyhawks remained in use with the RAAF in Borneo until the end of the war. In all, the RAAF acquired 841 Kittyhawks (not counting the British-ordered examples used in North Africa), including 163 P-40E, 42 P-40K, 90 P-40M and 553 P-40N models. In addition, the RAAF ordered 67 Kittyhawks for use by No. 120 (Netherlands East Indies) Squadron, a joint Australian-Dutch unit in the South West Pacific. The P-40 was retired by the RAAF in 1947.
A total of 13 Royal Canadian Air Force units operated the P-40 in the North West European or Alaskan theaters.
In mid-May 1940, Canadian and US officers watched comparative tests of an XP-40 and a Spitfire at RCAF Uplands, Ottawa. While the Spitfire was considered to have performed better, it was not available for use in Canada, and the P-40 was ordered to meet home air defense requirements. In all, eight Home War Establishment squadrons were equipped with the Kittyhawk: 72 Kittyhawk I, 12 Kittyhawk Ia, 15 Kittyhawk III and 35 Kittyhawk IV aircraft, for a total of 134 aircraft. These were mostly diverted from RAF Lend-Lease orders for service in Canada. The P-40 Kittyhawks were obtained in lieu of 144 P-39 Airacobras originally allocated to Canada but reassigned to the RAF.
However, before any home units received the P-40, three RCAF Article XV squadrons operated Tomahawk aircraft from bases in the United Kingdom. No. 403 Squadron RCAF, a fighter unit, used the Tomahawk Mk II briefly before converting to Spitfires. Two Army Co-operation (close air support) squadrons, Nos. 400 and 414, trained with Tomahawks before converting to Mustang Mk. I aircraft and a fighter/reconnaissance role. Of these, only No. 400 Squadron used Tomahawks operationally, conducting a number of armed sweeps over France in late 1941. RCAF pilots also flew Tomahawks or Kittyhawks with other British Commonwealth units based in North Africa, the Mediterranean, South East Asia and (in at least one case) the South West Pacific.
In 1942, the Imperial Japanese Navy occupied two islands, Attu and Kiska, in the Aleutians off Alaska. RCAF home defense P-40 squadrons saw combat over the Aleutians, assisting the USAAF. The RCAF initially sent 111 Squadron, flying the Kittyhawk I, to the US base on Adak Island. During the drawn-out campaign, 12 Canadian Kittyhawks operated on a rotational basis from a new, more advanced base on Amchitka, 75 mi (121 km) southeast of Kiska, with Nos. 14 and 111 Squadrons taking "turn-about" at the base. During a major attack on Japanese positions at Kiska on 25 September 1942, Squadron Leader Ken Boomer shot down a Nakajima A6M2-N ("Rufe") seaplane. The RCAF also purchased 12 P-40Ks directly from the USAAF while in the Aleutians. After the Japanese threat diminished, these two RCAF squadrons returned to Canada and eventually transferred to England without their Kittyhawks.
In January 1943, a further Article XV unit, 430 Squadron, was formed at RAF Hartford Bridge, England, and trained on the obsolete Tomahawk IIA. The squadron converted to the Mustang I before commencing operations in mid-1943.
In early 1945, pilots from No. 133 Squadron RCAF, operating the P-40N out of RCAF Patricia Bay (Victoria, British Columbia), intercepted and destroyed two Japanese balloon bombs, which were designed to cause wildfires on the North American mainland. On 21 February, Pilot Officer E. E. Maxwell shot down a balloon, which landed on Sumas Mountain in Washington State. On 10 March, Pilot Officer J. O. Patten destroyed a balloon near Saltspring Island, British Columbia. The last interception took place on 20 April 1945, when Pilot Officer P. V. Brodeur from 135 Squadron out of Abbotsford, British Columbia, shot down a balloon over Vedder Mountain.
The RCAF units that operated P-40s were, in order of conversion:
Some Royal New Zealand Air Force (RNZAF) pilots and New Zealanders in other air forces flew British P-40s while serving with DAF squadrons in North Africa and Italy, including the ace Jerry Westenra.
A total of 301 P-40s were allocated to the RNZAF under Lend-Lease, for use in the Pacific Theater, although four of these were lost in transit. The aircraft equipped 14 Squadron, 15 Squadron, 16 Squadron, 17 Squadron, 18 Squadron, 19 Squadron and 20 Squadron.
RNZAF P-40 squadrons were successful in air combat against the Japanese between 1942 and 1944. Their pilots claimed 100 aerial victories in P-40s, while losing 20 aircraft in combat. Geoff Fisken, the highest-scoring British Commonwealth ace in the Pacific, flew P-40s with 15 Squadron, although half of his victories were claimed with the Brewster Buffalo.
The overwhelming majority of RNZAF P-40 victories were scored against Japanese fighters, mostly Zeroes. Other victories included Aichi D3A "Val" dive bombers. The only confirmed twin-engine claim, a Ki-21 "Sally" (misidentified as a G4M "Betty"), fell to Fisken in July 1943.
In late 1943 and 1944, RNZAF P-40s were increasingly used against ground targets, including the innovative use of naval depth charges as improvised high-capacity bombs. The last front-line RNZAF P-40s were replaced by Vought F4U Corsairs in 1944, and the P-40s were then relegated to use as advanced pilot trainers.
The remaining RNZAF P-40s, excluding the 20 shot down and 154 written off, were mostly scrapped at Rukuhia in 1948.
The Soviet Voyenno-Vozdushnye Sily (VVS; "Military Air Forces") and Morskaya Aviatsiya (MA; "Naval Air Service") also referred to P-40s as "Tomahawks" and "Kittyhawks". The Curtiss P-40 Tomahawk/Kittyhawk was the first Allied fighter supplied to the USSR under the Lend-Lease agreement. The USSR received 247 P-40B/Cs (equivalent to the Tomahawk IIA/B in RAF service) and 2,178 P-40E, -K, -L, and -N models between 1941 and 1944. The Tomahawks were shipped from Great Britain and directly from the US, many of them arriving incomplete, lacking machine guns and even the lower half of the engine cowling. In late September 1941, the first 48 P-40s were assembled and checked in the USSR. Test flights revealed manufacturing defects: generator and oil pump gears and generator shafts failed repeatedly, leading to emergency landings. The test report indicated that the Tomahawk was inferior to Soviet M-105P-powered production fighters in speed and rate of climb, but had good short-field performance, horizontal maneuverability, range and endurance. Nevertheless, Tomahawks and Kittyhawks were used against the Germans. The 126th Fighter Aviation Regiment (IAP), fighting on the Western and Kalinin Fronts, was the first unit to receive the P-40. The regiment entered action on 12 October 1941, and by 15 November 1941 it had shot down 17 German aircraft. However, Lt (SG) Smirnov noted that the P-40's armament was sufficient for strafing enemy lines but rather ineffective in aerial combat, and another pilot, Stephan Ridny (a Hero of the Soviet Union), remarked that he had to expend half his ammunition at 50–100 meters (160–330 ft) to shoot down an enemy aircraft.
In January 1942, some 198 sorties were flown (334 flying hours) and 11 aerial engagements were fought, in which five Bf 109s, one Ju 88 and one He 111 were downed. These statistics suggest that, contrary to expectations, the Tomahawk was fully capable of meeting the Bf 109 in air combat, a conclusion borne out by the pilots' reports of the engagements. On 18 January 1942, Lieutenants S. V. Levin and I. P. Levsha, flying as a pair, fought an engagement with seven Bf 109s and shot down two of them without loss. On 22 January, a flight of three aircraft led by Lieutenant E. E. Lozov engaged 13 enemy aircraft and shot down two Bf 109Es, again without loss. Altogether, two Tomahawks were lost in January: one downed by German anti-aircraft artillery and one lost to Messerschmitts.
The Soviets stripped down their P-40s significantly for combat, in many cases removing the wing guns from the P-40B/C types altogether. Soviet Air Force reports state that pilots liked the P-40's range and fuel capacity, which were superior to those of most Soviet fighters, though they still preferred the P-39. Soviet pilot Nikolai G. Golodnikov recalled: "The cockpit was vast and high. At first it felt unpleasant to sit waist-high in glass, as the edge of the fuselage was almost at waist level. But the bullet-proof glass and armored seat were strong and visibility was good. The radio was also good. It was powerful, reliable, but only on HF (high frequency). The American radios did not have hand microphones but throat microphones. These were good throat mikes: small, light and comfortable." The biggest complaints of some Soviet airmen were the poor climb rate and maintenance problems, especially burned-out engines. VVS pilots usually flew the P-40 at War Emergency Power settings while in combat, which brought acceleration and speed performance closer to that of their German rivals but could burn out engines in a matter of weeks. Tires and batteries also failed. The fluid in the engine's radiators often froze, cracking their cores, which made the Allison engine unsuitable for operations in harsh winter conditions; during the winter of 1941, the 126th Fighter Aviation Regiment suffered cracked radiators on 38 occasions. Often, entire regiments were reduced to a single flyable aircraft because no replacement parts were available. Soviet units also had difficulty meeting the Allison engine's more demanding fuel and oil quality requirements. A fair number of burned-out P-40s were re-engined with Soviet Klimov M-105 engines, but these performed relatively poorly and were relegated to rear-area use.
Actually, the P-40 could engage all Messerschmitts on equal terms, almost to the end of 1943. If you take into consideration all the characteristics of the P-40, then the Tomahawk was equal to the Bf 109F and the Kittyhawk was slightly better. Its speed and vertical and horizontal manoeuvre were good and fully competitive with enemy aircraft. Acceleration rate was a bit low, but when you got used to the engine, it was OK. We considered the P-40 a decent fighter plane.
The P-40 saw the most front-line use in Soviet hands in 1942 and early 1943. Deliveries over the Alaska-Siberia (ALSIB) ferry route began in October 1942. The type was used in the northern sectors and played a significant role in the defense of Leningrad. The most numerically important variants were the P-40B/C, P-40E and P-40K/M. By the time the better P-40F and N types became available, production of superior Soviet fighters had increased sufficiently for the P-40 to be replaced in most Soviet Air Force units by the Lavochkin La-5 and various later Yakovlev types. In spring 1943, Lt D. I. Koval of the 45th IAP gained ace status on the North Caucasian front, shooting down six German aircraft while flying a P-40. Some Soviet P-40 squadrons had good combat records, and some Soviet pilots became aces on the P-40, though not as many as on the P-39 Airacobra, the most numerous Lend-Lease fighter used by the Soviet Union. Soviet commanders thought the Kittyhawk significantly outclassed the Hurricane, although it was "not in the same league as the Yak-1".
The Japanese Army captured some P-40s and later operated a number of them in Burma, apparently having as many as 10 flyable P-40Es. For a brief period in 1943, a few were used operationally by 2 Hiko Chutai, 50 Hiko Sentai (2nd Air Squadron, 50th Air Regiment) in the defense of Rangoon. This is attested by Yasuhiko Kuroe, a member of the 64 Hiko Sentai, who recorded in his memoirs that one Japanese-operated P-40 was shot down in error by a friendly Mitsubishi Ki-21 "Sally" over Rangoon.
The P-40 was used by over two dozen countries during and after the war, including Brazil, Egypt, Finland and Turkey. The last P-40s in military service, operated by the Brazilian Air Force (FAB), were retired in 1954.
In the air war over Finland, several Soviet P-40s were shot down or forced to crash-land for other reasons. The Finns, short of good aircraft, collected these and managed to repair one P-40M, P-40M-10-CU 43-5925, white 23, which received the Finnish Air Force serial number KH-51 (KH denoting "Kittyhawk", as the British designation of this type was Kittyhawk III). This aircraft was attached to an operational squadron, HLeLv 32 of the Finnish Air Force, but a lack of spares kept it on the ground, apart from a few evaluation flights.
Several P-40Ns were used by the Royal Netherlands East Indies Army Air Force with No. 120 (Netherlands East Indies) Squadron RAAF against the Japanese, and later during the fighting in Indonesia, until February 1949.
On 11 May 2012, the remains of a crashed P-40 Kittyhawk (ET574), which had run out of fuel, were found in the Egyptian Sahara desert. No trace of the pilot has been found to date. Due to the extremely arid conditions, little corrosion of the metal surfaces had occurred; the conditions in which it was found are similar to those preferred for aircraft boneyards. The RAF Museum attempted to bring the Kittyhawk back to Great Britain, paying a salvage team with Supermarine Spitfire PK664 to recover it. The attempt was unsuccessful: the Kittyhawk, after a poor-quality restoration, is now displayed outside at a military museum at El Alamein, and PK664 has been reported lost.
Of the 13,738 P-40s built, only 28 remain airworthy, three of them converted to a dual-control/dual-seat configuration. Approximately 13 aircraft are on static display and another 36 airframes are under restoration for either display or flight.
Data from Curtiss Aircraft 1907–1947 and America's Hundred Thousand: The U.S. Production Fighter Aircraft of World War II
General characteristics
Performance
Armament
Related development
Aircraft of comparable role, configuration, and era
Related lists | [
{
"paragraph_id": 0,
"text": "The Curtiss P-40 Warhawk is an American single-engined, single-seat, all-metal fighter-bomber that first flew in 1938. The P-40 design was a modification of the previous Curtiss P-36 Hawk which reduced development time and enabled a rapid entry into production and operational service. The Warhawk was used by most Allied powers during World War II, and remained in frontline service until the end of the war. It was the third most-produced American fighter of World War II, after the P-51 and P-47; by November 1944, when production of the P-40 ceased, 13,738 had been built, all at Curtiss-Wright Corporation's main production facilities in Buffalo, New York.",
"title": ""
},
{
"paragraph_id": 1,
"text": "P-40 Warhawk was the name the United States Army Air Corps gave the plane, and after June 1941, the USAAF adopted the name for all models, making it the official name in the U.S. for all P-40s. The British Commonwealth and Soviet air forces used the name Tomahawk for models equivalent to the original P-40, P-40B, and P-40C, and the name Kittyhawk for models equivalent to the P-40D and all later variants. P-40s first saw combat with the British Commonwealth squadrons of the Desert Air Force in the Middle East and North African campaigns, during June 1941. No. 112 Squadron Royal Air Force, was among the first to operate Tomahawks in North Africa and the unit was the first Allied military aviation unit to feature the \"shark mouth\" logo, copying similar markings on some Luftwaffe Messerschmitt Bf 110 twin-engine fighters.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The P-40's liquid-cooled, supercharged Allison V-1710 V-12 engine's lack of a two-speed supercharger made it inferior to Luftwaffe fighters such as the Messerschmitt Bf 109 or the Focke-Wulf Fw 190 in high-altitude combat and it was rarely used in operations in Northwest Europe. However, between 1941 and 1944, the P-40 played a critical role with Allied air forces in three major theaters: North Africa, the Southwest Pacific, and China. It also had a significant role in the Middle East, Southeast Asia, Eastern Europe, Alaska and Italy. The P-40's performance at high altitudes was not as important in those theaters, where it served as an air superiority fighter, bomber escort and fighter-bomber.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although it gained a postwar reputation as a mediocre design, suitable only for close air support, more recent research including scrutiny of the records of Allied squadrons indicates that this was not the case; the P-40 performed surprisingly well as an air superiority fighter, at times suffering severe losses, but also inflicting a very heavy toll on enemy aircraft. Based on war-time victory claims, over 200 Allied fighter pilots – from the UK, Australia, New Zealand, Canada, South Africa, the US and the Soviet Union – became aces flying the P-40. These included at least 20 double aces, mostly over North Africa, China, Burma and India, the South West Pacific and Eastern Europe. The P-40 offered the additional advantages of low cost and durability, which kept it in production as a ground-attack aircraft long after it was obsolescent as a fighter.",
"title": ""
},
{
"paragraph_id": 4,
"text": "On 14 October 1938, Curtiss test pilot Edward Elliott flew the prototype XP-40 on its first flight in Buffalo. The XP-40 was the 10th production Curtiss P-36 Hawk, with its Pratt & Whitney R-1830 Twin Wasp 14-cylinder air-cooled radial engine replaced at the direction of Chief Engineer Don R. Berlin by a liquid-cooled, supercharged Allison V-1710 V-12 engine. The first prototype placed the glycol coolant radiator in an underbelly position on the fighter, just aft of the wing's trailing edge. USAAC Fighter Projects Officer Lieutenant Benjamin S. Kelsey flew this prototype some 300 miles in 57 minutes, approximately 315 miles per hour (507 km/h). Hiding his disappointment, he told reporters that future versions would likely go 100 miles per hour (160 km/h) faster. Kelsey was interested in the Allison engine because it was sturdy and dependable, and it had a smooth, predictable power curve. The V-12 engine offered as much power as a radial engine but had a smaller frontal area and allowed a more streamlined cowl than an aircraft with a radial engine, promising a theoretical 5% increase in top speed.",
"title": "Design and development"
},
{
"paragraph_id": 5,
"text": "Curtiss engineers worked to improve the XP-40's speed by moving the radiator forward in steps. Seeing little gain, Kelsey ordered the aircraft to be evaluated in a NACA wind tunnel to identify solutions for better aerodynamic qualities. From 28 March to 11 April 1939, the prototype was studied by NACA. Based on the data obtained, Curtiss moved the glycol coolant radiator forward to the chin; its new air scoop also accommodated the oil cooler air intake. Other improvements to the landing gear doors and the exhaust manifold combined to give performance that was satisfactory to the USAAC. Without beneficial tail winds, Kelsey flew the XP-40 from Wright Field back to Curtiss's plant in Buffalo at an average speed of 354 mph (570 km/h). Further tests in December 1939 proved the fighter could reach 366 mph (589 km/h).",
"title": "Design and development"
},
{
"paragraph_id": 6,
"text": "An unusual production feature was a special truck rig to speed delivery at the main Curtiss plant in Buffalo, New York. The rig moved the newly built P-40s in two main components, the main wing and the fuselage, the eight miles from the plant to the airport where the two units were mated for flight and delivery.",
"title": "Design and development"
},
{
"paragraph_id": 7,
"text": "The P-40 was conceived as a pursuit aircraft and was agile at low and medium altitudes but suffered from a lack of power at higher altitudes. At medium and high speeds it was one of the tightest-turning early monoplane designs of the war, and it could out turn most opponents it faced in North Africa and the Russian Front. In the Pacific Theater it was out-turned at lower speeds by the lightweight fighters A6M Zero and Nakajima Ki-43 \"Oscar\". The American Volunteer Group Commander Claire Chennault advised against prolonged dog-fighting with the Japanese fighters due to speed reduction favoring the Japanese.",
"title": "Design and development"
},
{
"paragraph_id": 8,
"text": "Allison's V-1710 engines produced 1,040 hp (780 kW) at sea level and 14,000 ft (4,300 m). This was not powerful compared with contemporary fighters, and the early P-40 variants' top speeds were only average. The single-stage, single-speed supercharger meant that the P-40 was a poor high-altitude fighter. Later versions, with 1,200 hp (890 kW) Allisons or more powerful 1,400 hp Packard Merlin engines were more capable. Climb performance was fair to poor, depending on the subtype. Dive acceleration was good and dive speed was excellent. The highest-scoring P-40 ace, Clive Caldwell (RAAF), who claimed 22 of his 28½ kills in the type, said that the P-40 had \"almost no vices\", although \"it was a little difficult to control in terminal velocity\". The P-40 had one of the fastest maximum dive speeds of any fighter of the early war period, and good high-speed handling.",
"title": "Design and development"
},
{
"paragraph_id": 9,
"text": "The P-40 tolerated harsh conditions and a variety of climates. Its semi-modular design was easy to maintain in the field. It lacked innovations such as boosted ailerons or automatic leading edge slats, but its strong structure included a five-spar wing, which enabled P-40s to pull high-G turns and survive some midair collisions. Intentional ramming attacks against enemy aircraft were occasionally recorded as victories by the Desert Air Force and Soviet Air Forces. Caldwell said P-40s \"would take a tremendous amount of punishment, violent aerobatics as well as enemy action\". Operational range was good by early war standards and was almost double that of the Supermarine Spitfire or Messerschmitt Bf 109, although inferior to the Mitsubishi A6M Zero, Nakajima Ki-43 and Lockheed P-38 Lightning.",
"title": "Design and development"
},
{
"paragraph_id": 10,
"text": "Caldwell found the P-40C Tomahawk's armament of two .50-inch (13 mm) Browning AN/M2 \"light-barrel\" dorsal nose-mount synchronized machine guns and two .303-inch (7.7 mm) Browning machine guns in each wing to be inadequate. This was improved with the P-40D (Kittyhawk I) which abandoned the synchronized gun mounts and instead had two .50-inch (13 mm) guns in each wing, although Caldwell still preferred the earlier Tomahawk in other respects. The D had armor around the engine and the cockpit, which enabled it to withstand considerable damage. This allowed Allied pilots in Asia and the Pacific to attack Japanese fighters head on, rather than try to out-turn and out-climb their opponents. Late-model P-40s were well armored. Visibility was adequate, although hampered by a complex windscreen frame, and completely blocked to the rear in early models by a raised turtledeck. Poor ground visibility and relatively narrow landing gear track caused many losses on the ground.",
"title": "Design and development"
},
{
"paragraph_id": 11,
"text": "Curtiss tested a follow-on design, the Curtiss XP-46, but it offered little improvement over newer P-40 models and was cancelled.",
"title": "Design and development"
},
{
"paragraph_id": 12,
"text": "In April 1939, the U.S. Army Air Corps, having witnessed the new, sleek, high-speed, in-line-engined fighters of the European air forces, placed the largest fighter order it had ever made for 524 P-40s.",
"title": "Operational history"
},
{
"paragraph_id": 13,
"text": "An early order came from the French Armée de l'Air, which was already operating P-36s. The Armée de l'Air ordered 100 (later the order was increased to 230) as the Hawk 81A-1 but the French were defeated before the aircraft had left the factory and the aircraft were diverted to British and Commonwealth service (as the Tomahawk I), in some cases complete with metric flight instruments.",
"title": "Operational history"
},
{
"paragraph_id": 14,
"text": "In late 1942, as French forces in North Africa split from the Vichy government to side with the Allies, U.S. forces transferred P-40Fs from 33rd FG to GC II/5, a squadron that was historically associated with the Lafayette Escadrille. GC II/5 used its P-40Fs and Ls in combat in Tunisia and later for patrol duty off the Mediterranean coast until mid-1944, when they were replaced by Republic P-47D Thunderbolts.",
"title": "Operational history"
},
{
"paragraph_id": 15,
"text": "In all, 18 Royal Air Force (RAF) squadrons, four Royal Canadian Air Force (RCAF), three South African Air Force (SAAF) and two Royal Australian Air Force (RAAF) squadrons serving with RAF formations, used P-40s. The first units to convert were Hawker Hurricane squadrons of the Desert Air Force (DAF), in early 1941. The first Tomahawks delivered came without armor, bulletproof windscreens or self-sealing fuel tanks, which were installed in subsequent shipments. Pilots used to British fighters sometimes found it difficult to adapt to the P-40's rear-folding landing gear, which was more prone to collapse than the lateral-folding landing gear of the Hurricane or Supermarine Spitfire. In contrast to the \"three-point landing\" commonly employed with British types, P-40 pilots were obliged to use a \"wheels landing\": a longer, low angle approach that touched down on the main wheels first.",
"title": "Operational history"
},
{
"paragraph_id": 16,
"text": "Testing showed the aircraft did not have the performance needed for use in Northwest Europe at high-altitude, due to the service ceiling limitation. Spitfires used in the theater operated at heights around 30,000 ft (9,100 m), while the P-40's Allison engine, with its single-stage, low altitude rated supercharger, worked best at 15,000 ft (4,600 m) or lower. When the Tomahawk was used by Allied units based in the UK from February 1941, this limitation relegated the Tomahawk to low-level reconnaissance with RAF Army Cooperation Command and only No. 403 Squadron RCAF was used in the fighter role for a mere 29 sorties, before being replaced by Spitfires. Air Ministry deemed the P-40 unsuitable for the theater. UK P-40 squadrons from mid-1942 re-equipped with aircraft such as Mustangs",
"title": "Operational history"
},
{
"paragraph_id": 17,
"text": "The Tomahawk was superseded in North Africa by the more powerful Kittyhawk (\"D\"-mark onwards) types from early 1942, though some Tomahawks remained in service until 1943. Kittyhawks included many improvements and were the DAF's air superiority fighter for the critical first few months of 1942, until \"tropicalised\" Spitfires were available. DAF units received nearly 330 Packard V-1650 Merlin-powered P-40Fs, called Kittyhawk IIs, most of which went to the USAAF and the majority of the 700 \"lightweight\" L models, also powered by the Packard Merlin, in which the armament was reduced to four .50 in (12.7 mm) Brownings (Kittyhawk IIA). The DAF also received some 21 of the later P-40K and the majority of the 600 P-40Ms built; these were known as Kittyhawk IIIs. The \"lightweight\" P-40Ns (Kittyhawk IV) arrived from early 1943 and were used mostly as fighter-bombers. From July 1942 until mid-1943, elements of the U.S. 57th Fighter Group (57th FG) were attached to DAF P-40 units. The British government also donated 23 P-40s to the Soviet Union.",
"title": "Operational history"
},
{
"paragraph_id": 18,
"text": "Tomahawks and Kittyhawks bore the brunt of Luftwaffe and Regia Aeronautica fighter attacks during the North African campaign. The P-40s were considered superior to the Hurricane, which they replaced as the primary fighter of the Desert Air Force.",
"title": "Operational history"
},
{
"paragraph_id": 19,
"text": "I would evade being shot at accurately by pulling so much g-force...that you could feel the blood leaving the head and coming down over your eyes... And you would fly like that for as long as you could, knowing that if anyone was trying to get on your tail they were going through the same bleary vision that you had and you might get away... I had deliberately decided that any deficiency the Kittyhawk had was offset by aggression. And I'd done a little bit of boxing – I beat much better opponents simply by going for [them]. And I decided to use that in the air. And it paid off.",
"title": "Operational history"
},
{
"paragraph_id": 20,
"text": "The P-40 initially proved quite effective against Axis aircraft and contributed to a slight shift of advantage in the Allies' favor. The gradual replacement of Hurricanes by the Tomahawks and Kittyhawks led to the Luftwaffe accelerating retirement of the Bf 109E and introducing the newer Bf 109F; these were to be flown by the veteran pilots of elite Luftwaffe units, such as Jagdgeschwader 27 (JG27), in North Africa. The P-40 was generally considered roughly equal or slightly superior to the Bf 109 at low altitude but inferior at high altitude, particularly against the Bf 109F. Most air combat in North Africa took place well below 16,000 ft (4,900 m), negating much of the Bf 109's superiority. The P-40 usually had an advantage over the Bf 109 in turning, dive speed and structural strength, was roughly equal in firepower but was slightly inferior in speed and outclassed in rate of climb and operational ceiling.",
"title": "Operational history"
},
{
"paragraph_id": 21,
"text": "The P-40 was generally superior to early Italian fighter types, such as the Fiat G.50 Freccia and the Macchi C.200. Its performance against the Macchi C.202 Folgore elicited varying opinions. Some observers consider the Macchi C.202 superior. Caldwell, who scored victories against them in his P-40, felt that the Folgore was superior to the P-40 and the Bf 109 except that its armament of only two or four machine guns was inadequate. Other observers considered the two equally matched or favored the Folgore in aerobatic performance, such as turning radius. The aviation historian Walter J. Boyne wrote that over Africa, the P-40 and the Folgore were \"equivalent\". Against its lack of high-altitude performance, the P-40 was considered to be a stable gun platform and its rugged construction meant that it was able to operate from rough front line airstrips with a good rate of serviceability.",
"title": "Operational history"
},
{
"paragraph_id": 22,
"text": "The earliest victory claims by P-40 pilots include Vichy French aircraft, during the 1941 Syria-Lebanon campaign, against Dewoitine D.520s, a type often considered to be the best French fighter of the war. The P-40 was deadly against Axis bombers in the theater, as well as against the Bf 110 twin-engine fighter. In June 1941, Caldwell, of 250 Squadron in Egypt, flying as flying Officer (F/O) Jack Hamlyn's wingman, recorded in his log book that he was involved in the first air combat victory for the P-40. This was a CANT Z.1007 bomber on 6 June. The claim was not officially recognized, as the crash of the CANT was not witnessed. The first official victory occurred on 8 June, when Hamlyn and Flight Sergeant (Flt Sgt) Tom Paxton destroyed a CANT Z.1007 from 211 Squadriglia of the Regia Aeronautica, over Alexandria. Several days later, the Tomahawk was in action over Syria with No. 3 Squadron RAAF, which claimed 19 aerial victories over Vichy French aircraft during June and July 1941, for the loss of one P-40 (and one lost to ground fire).",
"title": "Operational history"
},
{
"paragraph_id": 23,
"text": "Some DAF units initially failed to use the P-40's strengths or used outdated defensive tactics such as the Lufbery circle. The superior climb rate of the Bf 109 enabled fast, swooping attacks, neutralizing the advantages offered by conventional defensive tactics. Various new formations were tried by Tomahawk units from 1941 to 1942, including \"fluid pairs\" (similar to the German rotte); the Thach Weave (one or two \"weavers\") at the back of a squadron in formation and whole squadrons bobbing and weaving in loose formations. Werner Schröer, who was credited with destroying 114 Allied aircraft in only 197 combat missions, referred to the latter formation as \"bunches of grapes\", because he found them so easy to pick off. The leading German expert in North Africa, Hans-Joachim Marseille, claimed as many as 101 P-40s during his career.",
"title": "Operational history"
},
{
"paragraph_id": 24,
"text": "From 26 May 1942, Kittyhawk units operated primarily as fighter-bomber units, giving rise to the nickname \"Kittybomber\". As a result of this change in role and because DAF P-40 squadrons were frequently used in bomber escort and close air support missions, they suffered relatively high losses; many Desert Air Force P-40 pilots were caught flying low and slow by marauding Bf 109s.",
"title": "Operational history"
},
{
"paragraph_id": 25,
"text": "Caldwell believed that Operational Training Units did not properly prepare pilots for air combat in the P-40 and as a commander, stressed the importance of training novice pilots properly.",
"title": "Operational history"
},
{
"paragraph_id": 26,
"text": "Competent pilots who took advantage of the P-40's strengths were effective against the best of the Luftwaffe and Regia Aeronautica. In August 1941, Caldwell was attacked by two Bf 109s, one of them piloted by German ace Werner Schröer. Although Caldwell was wounded three times and his Tomahawk was hit by more than 100 7.92 mm (0.312 in) bullets and five 20 mm cannon shells, Caldwell shot down Schröer's wingman and returned to base. Some sources also claim that in December 1941, Caldwell killed a prominent German Experte, Erbo von Kageneck (69 kills), while flying a P-40. Caldwell's victories in North Africa included 10 Bf 109s and two Macchi C.202s. Billy Drake of 112 Squadron was the leading British P-40 ace with 13 victories. James \"Stocky\" Edwards (RCAF), who achieved 12 kills in the P-40 in North Africa, shot down German ace Otto Schulz (51 kills) while flying a Kittyhawk with No. 260 Squadron RAF. Caldwell, Drake, Edwards and Nicky Barr were among at least a dozen pilots who achieved ace status twice over while flying the P-40. A total of 46 British Commonwealth pilots became aces in P-40s, including seven double aces.",
"title": "Operational history"
},
{
"paragraph_id": 27,
"text": "The Flying Tigers, known officially as the 1st American Volunteer Group (AVG), were a unit of the Chinese Air Force, recruited from amongst U.S. Navy, Marine Corps and Army aviators and ground crew.",
"title": "Operational history"
},
{
"paragraph_id": 28,
"text": "Chennault received crated Model Bs which his airmen assembled in Burma at the end of 1941, adding self-sealing fuel tanks and a second pair of wing guns, such that the aircraft became a hybrid of B and C models. These were not well-liked by their pilots: they lacked drop tanks for extra range, and there were no bomb racks on the wings. Chennault considered the liquid-cooled engine vulnerable in combat because a single bullet through the coolant system would cause the engine to overheat in minutes. The Tomahawks also had no radios, so the AVG improvised by installing a fragile radio transceiver, the RCA-7-H, which had been built for a Piper Cub. Because the plane had a single-stage low-altitude supercharger, its effective ceiling was about 25,000 feet (7,600 m). The most critical problem was the lack of spare parts; the only source was from damaged aircraft. The planes were viewed as cast-offs that no one else wanted, dangerous and difficult to fly. But the pilots did appreciate some of the planes' features. There were two heavy sheets of steel behind the pilot's head and back that offered solid protection, and overall the planes were ruggedly constructed.",
"title": "Operational history"
},
{
"paragraph_id": 29,
"text": "Compared to opposing Japanese fighters, the P-40B's strengths were that it was sturdy, well armed, faster in a dive and possessed an excellent rate of roll. While the P-40s could not match the maneuverability of the Japanese Army air arm's Nakajima Ki-27s and Ki-43s, nor the much more famous Zero naval fighter in slow, turning dogfights, at higher speeds the P-40s were more than a match. AVG leader Claire Chennault trained his pilots to use the P-40's particular performance advantages. The P-40 had a higher dive speed than any Japanese fighter aircraft of the early war years, for example, and could exploit so-called \"boom-and-zoom\" tactics. The AVG was highly successful, and its feats were widely publicized by an active cadre of international journalists to boost sagging public morale at home. According to its official records, in just 6+1⁄2 months, the Flying Tigers destroyed 297 enemy aircraft for the loss of just four of its own in air-to-air combat.",
"title": "Operational history"
},
{
"paragraph_id": 30,
"text": "In the spring of 1942, the AVG received a small number of Model E's. Each came equipped with a radio, six .50-caliber machine guns, and auxiliary bomb racks that could hold 35-lb fragmentation bombs. Chennault's armorer added bomb racks for 570-lb Russian bombs, which the Chinese had in abundance. These planes were used in the battle of the Salween River Gorge in late May 1942, which kept the Japanese from entering China from Burma and threatening Kunming. Spare parts, however, remained in short supply. \"Scores of new planes...were now in India, and there they stayed—in case the Japanese decided to invade... the AVG was lucky to get a few tires and spark plugs with which to carry on its daily war.\"",
"title": "Operational history"
},
{
"paragraph_id": 31,
"text": "China received 27 P-40E models in early 1943. These were assigned to squadrons of the 4th Air Group.",
"title": "Operational history"
},
{
"paragraph_id": 32,
"text": "A total of 15 USAAF pursuit/fighter groups (FG), along with other pursuit/fighter squadrons and a few tactical reconnaissance (TR) units, operated the P-40 during 1941–45. As was also the case with the Bell P-39 Airacobra, many USAAF officers considered the P-40 exceptional but it was gradually replaced by the Lockheed P-38 Lightning, the Republic P-47 Thunderbolt and the North American P-51 Mustang. The bulk of the fighter operations by the USAAF in 1942–43 were borne by the P-40 and the P-39. In the Pacific, these two fighters, along with the U.S. Navy Grumman F4F Wildcat, contributed more than any other U.S. types to breaking Japanese air power during this critical period.",
"title": "Operational history"
},
{
"paragraph_id": 33,
"text": "The P-40 was the main USAAF fighter aircraft in the South West Pacific and Pacific Ocean theaters during 1941–42. At Pearl Harbor and in the Philippines, USAAF P-40 squadrons suffered crippling losses on the ground and in the air to Japanese fighters such as the A6M Zero and Ki-43 Hayabusa respectively. During the attack on Pearl Harbor, most of the USAAF fighters were P-40Bs, the majority of which were destroyed. However, a few P-40s managed to get in the air and shoot down several Japanese aircraft, most notably by George Welch and Kenneth Taylor.",
"title": "Operational history"
},
{
"paragraph_id": 34,
"text": "In the Dutch East Indies campaign, the 17th Pursuit Squadron (Provisional), formed from USAAF pilots evacuated from the Philippines, claimed 49 Japanese aircraft destroyed, for the loss of 17 P-40s The seaplane tender USS Langley was sunk by Japanese airplanes while delivering P-40s to Tjilatjap, Java. In the Solomon Islands and New Guinea Campaigns and the air defence of Australia, improved tactics and training allowed the USAAF to better use the strengths of the P-40. Due to aircraft fatigue, scarcity of spare parts and replacement problems, the US Fifth Air Force and Royal Australian Air Force created a joint P-40 management and replacement pool on 30 July 1942 and many P-40s went back and forth between the air forces.",
"title": "Operational history"
},
{
"paragraph_id": 35,
"text": "The 49th Fighter Group was in action in the Pacific from the beginning of the war. Robert M. DeHaven scored 10 kills (of 14 overall) in the P-40 with the 49th FG. He compared the P-40 favorably with the P-38:",
"title": "Operational history"
},
{
"paragraph_id": 36,
"text": "The 8th, 15th, 18th, 24th, 49th, 343rd and 347th PGs/FGs, flew P-40s in the Pacific theaters between 1941 and 1945, with most units converting to P-38s from 1943 to 1944. In 1945, the 71st Reconnaissance Group employed them as armed forward air controllers during ground operations in the Philippines, until it received delivery of P-51s. They claimed 655 aerial victories.",
"title": "Operational history"
},
{
"paragraph_id": 37,
"text": "Contrary to conventional wisdom, with sufficient altitude, the P-40 could turn with the A6M and other Japanese fighters, using a combination of a nose-down vertical turn with a bank turn, a technique known as a low yo-yo. Robert DeHaven describes how this tactic was used in the 49th Fighter group:",
"title": "Operational history"
},
{
"paragraph_id": 38,
"text": "USAAF and Chinese P-40 pilots performed well in this theater against many Japanese types such as the Ki-43, Nakajima Ki-44 \"Tojo\" and the Zero. The P-40 remained in use in the China Burma India Theater (CBI) until 1944 and was reportedly preferred over the P-51 Mustang by some US pilots flying in China. The American Volunteer Group (Flying Tigers) was integrated into the USAAF as the 23rd Fighter Group in June 1942. The unit continued to fly newer model P-40s until 1944, achieving a high kill-to-loss ratio. In the Battle of the Salween River Gorge of May 1942 the AVG used the P-40E model equipped with wing racks that could carry six 35-pound fragmentation bombs and Chennault's armorer developed belly racks to carry Russian 570-pound bombs, which the Chinese had in large quantity.",
"title": "Operational history"
},
{
"paragraph_id": 39,
"text": "Units arriving in the CBI after the AVG in the 10th and 14th Air Forces continued to perform well with the P-40, claiming 973 kills in the theater, or 64.8 percent of all enemy aircraft shot down. Aviation historian Carl Molesworth stated that \"...the P-40 simply dominated the skies over Burma and China. They were able to establish air superiority over free China, northern Burma and the Assam valley of India in 1942, and they never relinquished it.\" The 3rd, 5th, 51st and 80th FGs, along with the 10th TRS, operated the P-40 in the CBI. CBI P-40 pilots used the aircraft very effectively as a fighter-bomber. The 80th Fighter Group in particular used its so-called B-40 (P-40s carrying 1,000-pound high-explosive bombs) to destroy bridges and kill bridge repair crews, sometimes demolishing their target with one bomb. At least 40 U.S. pilots reached ace status while flying the P-40 in the CBI.",
"title": "Operational history"
},
{
"paragraph_id": 40,
"text": "On 14 August 1942, the first confirmed victory by a USAAF unit over a German aircraft in World War II was achieved by a P-40C pilot. 2nd Lt Joseph D. Shaffer, of the 33rd Fighter Squadron, intercepted a Focke-Wulf Fw 200C-3 maritime patrol aircraft that overflew his base at Reykjavík, Iceland. Shaffer damaged the Fw 200, which was finished off by a P-38F. Warhawks were used extensively in the Mediterranean and Middle East theatre of World War II by USAAF units, including the 33rd, 57th, 58th, 79th, 324th and 325th Fighter Groups. While the P-40 suffered heavy losses in the MTO, many USAAF P-40 units achieved high kill-to-loss ratios against Axis aircraft; the 324th FG scored better than a 2:1 ratio in the MTO. In all, 23 U.S. pilots became aces in the MTO on the P-40, most of them during the first half of 1943.",
"title": "Operational history"
},
{
"paragraph_id": 41,
"text": "P-40 pilots from the 57th FG were the first USAAF fliers to see action in the MTO, while attached to Desert Air Force Kittyhawk squadrons, from July 1942. The 57th was also the main unit involved in the \"Palm Sunday Massacre\", on 18 April 1943. Decoded Ultra signals revealed a plan for a large formation of Junkers Ju 52 transports to cross the Mediterranean, escorted by German and Italian fighters. Between 1630 and 1830 hours, all wings of the group were engaged in an intensive effort against the enemy air transports. Of the four Kittyhawk wings, three had left the patrol area before a convoy of a 100+ enemy transports were sighted by 57th FG, which tallied 74 aircraft destroyed. The group was last in the area, and intercepted the Ju 52s escorted by large numbers of Bf 109s, Bf 110s and Macchi C.202s. The group claimed 58 Ju 52s, 14 Bf 109s and two Bf 110s destroyed, with several probables and damaged. Between 20 and 40 of the Axis aircraft landed on the beaches around Cap Bon to avoid being shot down; six Allied fighters were lost, five of them P-40s.",
"title": "Operational history"
},
{
"paragraph_id": 42,
"text": "On 22 April, in Operation Flax, a similar force of P-40s attacked a formation of 14 Messerschmitt Me 323 Gigant (\"Giant\") six-engine transports, covered by seven Bf 109s from II./JG 27. All the transports were shot down, for a loss of three P-40s. The 57th FG was equipped with the Curtiss fighter until early 1944, during which time they were credited with at least 140 air-to-air kills. On 23 February 1943, during Operation Torch, the pilots of the 58th FG flew 75 P-40Ls off the aircraft carrier USS Ranger to the newly captured Vichy French airfield, Cazas, near Casablanca, in French Morocco. The aircraft supplied the 33rd FG and the pilots were reassigned.",
"title": "Operational history"
},
{
"paragraph_id": 43,
"text": "The 325th FG (known as the \"Checkertail Clan\") flew P-40s in the MTO and was credited with at least 133 air-to-air kills from April–October 1943, of which 95 were Bf 109s and 26 were Macchi C.202s, for the loss of 17 P-40s in combat. The 325th FG historian Carol Cathcart wrote:",
"title": "Operational history"
},
{
"paragraph_id": 44,
"text": "on 30 July, 20 P-40s of the 317th [Fighter Squadron] ... took off on a fighter sweep ... over Sardinia. As they turned to fly south over the west part of the island, they were attacked near Sassari... The attacking force consisted of 25 to 30 Bf 109s and Macchi C.202s... In the brief, intense battle that occurred ... [the 317th claimed] 21 enemy aircraft.",
"title": "Operational history"
},
{
"paragraph_id": 45,
"text": "Cathcart wrote that Lt. Robert Sederberg assisted a comrade being attacked by five Bf 109s, destroyed at least one German aircraft, and may have shot down as many as five. Sederberg was shot down and became a prisoner of war.",
"title": "Operational history"
},
{
"paragraph_id": 46,
"text": "A famous African-American unit, the 99th FS, better known as the \"Tuskegee Airmen\" or \"Redtails\", flew P-40s in stateside training and for their initial eight months in the MTO. On 9 June 1943, they became the first African-American fighter pilots to engage enemy aircraft, over Pantelleria, Italy. A single Focke-Wulf Fw 190 was reported damaged by Lieutenant Willie Ashley Jr. On 2 July the squadron claimed its first verified kill; a Fw 190 destroyed by Captain Charles Hall. The 99th continued to score with P-40s until February 1944, when they were assigned P-39s and P-51 Mustangs.",
"title": "Operational history"
},
{
"paragraph_id": 47,
"text": "The much-lightened P-40L was most heavily used in the MTO, primarily by U.S. pilots. Many US pilots stripped down their P-40s even further to improve performance, often removing two or more of the wing guns from the P-40F/L.",
"title": "Operational history"
},
{
"paragraph_id": 48,
"text": "The Kittyhawk was the main fighter used by the RAAF in World War II, in greater numbers than the Spitfire. Two RAAF squadrons serving with the Desert Air Force, No. 3 and No. 450 Squadrons, were the first Australian units to be assigned P-40s. Other RAAF pilots served with RAF or SAAF P-40 squadrons in the theater.",
"title": "Operational history"
},
{
"paragraph_id": 49,
"text": "Many RAAF pilots achieved high scores in the P-40. At least five reached \"double ace\" status: Clive Caldwell, Nicky Barr, John Waddy, Bob Whittle (11 kills each) and Bobby Gibbes (10 kills) in the Middle East, North African and/or New Guinea campaigns. In all, 18 RAAF pilots became aces while flying P-40s.",
"title": "Operational history"
},
{
"paragraph_id": 50,
"text": "Nicky Barr, like many Australian pilots, considered the P-40 a reliable mount: \"The Kittyhawk became, to me, a friend. It was quite capable of getting you out of trouble more often than not. It was a real warhorse.\"",
"title": "Operational history"
},
{
"paragraph_id": 51,
"text": "At the same time as the heaviest fighting in North Africa, the Pacific War was also in its early stages, and RAAF units in Australia were completely lacking in suitable fighter aircraft. Spitfire production was being absorbed by the war in Europe; P-38s were trialled, but were difficult to obtain; Mustangs had not yet reached squadrons anywhere, and Australia's tiny and inexperienced aircraft industry was geared towards larger aircraft. USAAF P-40s and their pilots originally intended for the U.S. Far East Air Force in the Philippines, but diverted to Australia as a result of Japanese naval activity were the first suitable fighter aircraft to arrive in substantial numbers. By mid-1942, the RAAF was able to obtain some USAAF replacement shipments.",
"title": "Operational history"
},
{
"paragraph_id": 52,
"text": "RAAF Kittyhawks played a crucial role in the South West Pacific theater. They fought on the front line as fighters during the critical early years of the Pacific War, and the durability and bomb-carrying abilities (1,000 lb/454 kg) of the P-40 also made it ideal for the ground attack role. During the Battle of Port Moresby RAAF 75 destroyed or damaged some 33 Japanese aircraft of various types. With another 30 probables. General Henry H. Arnold said of No 75 squadron: \"Victory in the entire air war against Japan can be traced back to the actions which took place from that dusty strip at Port Moresby in early 1942.\" For example, 75, and 76 Squadrons played a critical role during the Battle of Milne Bay, fending off Japanese aircraft and providing effective close air support for the Australian infantry, negating the initial Japanese advantage in light tanks and sea power. The Kittyhawks fired \"nearly 200,000 rounds of half-inch ammunition\" during the course of the battle.",
"title": "Operational history"
},
{
"paragraph_id": 53,
"text": "The RAAF units that most used Kittyhawks in the South West Pacific were 75, 76, 77, 78, 80, 82, 84 and 86 Squadrons. These squadrons saw action mostly in the New Guinea and Borneo campaigns.",
"title": "Operational history"
},
{
"paragraph_id": 54,
"text": "Late in 1945, RAAF fighter squadrons in the South West Pacific began converting to P-51Ds. However, Kittyhawks were in use with the RAAF until the end of the war, in Borneo. In all, the RAAF acquired 841 Kittyhawks (not counting the British-ordered examples used in North Africa), including 163 P-40E, 42 P-40K, 90 P-40 M and 553 P-40N models. In addition, the RAAF ordered 67 Kittyhawks for use by No. 120 (Netherlands East Indies) Squadron (a joint Australian-Dutch unit in the South West Pacific). The P-40 was retired by the RAAF in 1947.",
"title": "Operational history"
},
{
"paragraph_id": 55,
"text": "A total of 13 Royal Canadian Air Force units operated the P-40 in the North West European or Alaskan theaters.",
"title": "Operational history"
},
{
"paragraph_id": 56,
"text": "In mid-May 1940, Canadian and US officers watched comparative tests of a XP-40 and a Spitfire, at RCAF Uplands, Ottawa. While the Spitfire was considered to have performed better, it was not available for use in Canada and the P-40 was ordered to meet home air defense requirements. In all, eight Home War Establishment Squadrons were equipped with the Kittyhawk: 72 Kittyhawk I, 12 Kittyhawk Ia, 15 Kittyhawk III and 35 Kittyhawk IV aircraft, for a total of 134 aircraft. These aircraft were mostly diverted from RAF Lend-Lease orders for service in Canada. The P-40 Kittyhawks were obtained in lieu of 144 P-39 Airacobras originally allocated to Canada but reassigned to the RAF.",
"title": "Operational history"
},
{
"paragraph_id": 57,
"text": "However, before any home units received the P-40, three RCAF Article XV squadrons operated Tomahawk aircraft from bases in the United Kingdom. No. 403 Squadron RCAF, a fighter unit, used the Tomahawk Mk II briefly before converting to Spitfires. Two Army Co-operation (close air support) squadrons: 400 and 414 Sqns trained with Tomahawks, before converting to Mustang Mk. I aircraft and a fighter/reconnaissance role. Of these, only No. 400 Squadron used Tomahawks operationally, conducting a number of armed sweeps over France in the late 1941. RCAF pilots also flew Tomahawks or Kittyhawks with other British Commonwealth units based in North Africa, the Mediterranean, South East Asia and (in at least one case) the South West Pacific.",
"title": "Operational history"
},
{
"paragraph_id": 58,
"text": "In 1942, the Imperial Japanese Navy occupied two islands, Attu and Kiska, in the Aleutians, off Alaska. RCAF home defense P-40 squadrons saw combat over the Aleutians, assisting the USAAF. The RCAF initially sent 111 Squadron, flying the Kittyhawk I, to the US base on Adak island. During the drawn-out campaign, 12 Canadian Kittyhawks operated on a rotational basis from a new, more advanced base on Amchitka,75 mi (121 km) southeast of Kiska. 14 and 111 Sqns took \"turn-about\" at the base. During a major attack on Japanese positions at Kiska on 25 September 1942, Squadron Leader Ken Boomer shot down a Nakajima A6M2-N (\"Rufe\") seaplane. The RCAF also purchased 12 P-40Ks directly from the USAAF while in the Aleutians. After the Japanese threat diminished, these two RCAF squadrons returned to Canada and eventually transferred to England without their Kittyhawks.",
"title": "Operational history"
},
{
"paragraph_id": 59,
"text": "In January 1943, a further Article XV unit, 430 Squadron was formed at RAF Hartford Bridge, England and trained on obsolete Tomahawk IIA. The squadron converted to the Mustang I before commencing operations in mid-1943.",
"title": "Operational history"
},
{
"paragraph_id": 60,
"text": "In early 1945 pilots from No. 133 Squadron RCAF, operating the P-40N out of RCAF Patricia Bay, (Victoria, British Columbia), intercepted and destroyed two Japanese balloon-bombs, which were designed to cause wildfires on the North American mainland. On 21 February, Pilot Officer E. E. Maxwell shot down a balloon, which landed on Sumas Mountain in Washington State. On 10 March, Pilot Officer J. 0. Patten destroyed a balloon near Saltspring Island, British Columbia. The last interception took place on 20 April 1945 when Pilot Officer P.V. Brodeur from 135 Squadron out of Abbotsford, British Columbia shot down a balloon over Vedder Mountain.",
"title": "Operational history"
},
{
"paragraph_id": 61,
"text": "The RCAF units that operated P-40s were, in order of conversion:",
"title": "Operational history"
},
{
"paragraph_id": 62,
"text": "Some Royal New Zealand Air Force (RNZAF) pilots and New Zealanders in other air forces flew British P-40s while serving with DAF squadrons in North Africa and Italy, including the ace Jerry Westenra.",
"title": "Operational history"
},
{
"paragraph_id": 63,
"text": "A total of 301 P-40s were allocated to the RNZAF under Lend-Lease, for use in the Pacific Theater, although four of these were lost in transit. The aircraft equipped 14 Squadron, 15 Squadron, 16 Squadron, 17 Squadron, 18 Squadron, 19 Squadron and 20 Squadron.",
"title": "Operational history"
},
{
"paragraph_id": 64,
"text": "RNZAF P-40 squadrons were successful in air combat against the Japanese between 1942 and 1944. Their pilots claimed 100 aerial victories in P-40s, whilst losing 20 aircraft in combat Geoff Fisken, the highest scoring British Commonwealth ace in the Pacific, flew P-40s with 15 Squadron, although half of his victories were claimed with the Brewster Buffalo.",
"title": "Operational history"
},
{
"paragraph_id": 65,
"text": "The overwhelming majority of RNZAF P-40 victories were scored against Japanese fighters, mostly Zeroes. Other victories included Aichi D3A \"Val\" dive bombers. The only confirmed twin engine claim, a Ki-21 \"Sally\" (misidentified as a G4M \"Betty\") fell to Fisken in July 1943.",
"title": "Operational history"
},
{
"paragraph_id": 66,
"text": "From late 1943 and 1944, RNZAF P-40s were increasingly used against ground targets, including the innovative use of naval depth charges as improvised high-capacity bombs. The last front line RNZAF P-40s were replaced by Vought F4U Corsairs in 1944. The P-40s were relegated to use as advanced pilot trainers.",
"title": "Operational history"
},
{
"paragraph_id": 67,
"text": "The remaining RNZAF P-40s, excluding the 20 shot down and 154 written off, were mostly scrapped at Rukuhia in 1948.",
"title": "Operational history"
},
{
"paragraph_id": 68,
"text": "The Soviet Voyenno-Vozdushnye Sily (VVS; \"Military Air Forces\") and Morskaya Aviatsiya (MA; \"Naval Air Service\") also referred to P-40s as \"Tomahawks\" and \"Kittyhawks\". In fact, the Curtiss P-40 Tomahawk / Kittyhawk was the first Allied fighter supplied to the USSR under the Lend-Lease agreement. The USSR received 247 P-40B/Cs (equivalent to the Tomahawk IIA/B in RAF service) and 2,178 P-40E, -K, -L, and -N models between 1941 and 1944. The Tomahawks were shipped from Great Britain and directly from the US, many of them arriving incomplete, lacking machine guns and even the lower half of the engine cowling. In late September 1941, the first 48 P-40s were assembled and checked in the USSR. Test flights showed some manufacturing defects: generator and oil pump gears and generator shafts failed repeatedly, which led to emergency landings. The test report indicated that the Tomahawk was inferior to Soviet \"M-105P-powered production fighters in speed and rate of climb. However, it had good short field performance, horizontal maneuverability, range, and endurance.\" Nevertheless, Tomahawks and Kittyhawks were used against the Germans. The 126th Fighter Aviation Regiment (IAP), fighting on the Western and Kalinin Fronts, were the first unit to receive the P-40. The regiment entered action on 12 October 1941. By 15 November 1941, the regiment had shot down 17 German aircraft. However, Lt (SG) Smirnov noted that the P-40 armament was sufficient for strafing enemy lines but rather ineffective in aerial combat. Another pilot, Stephan Ridny (a Hero of the Soviet Union), remarked that he had to shoot half the ammunition at 50–100 meters (165–340 ft) to shoot down an enemy aircraft.",
"title": "Operational history"
},
{
"paragraph_id": 69,
"text": "In January 1942, some 198 aircraft sorties were flown (334 flying hours) and 11 aerial engagements were conducted, in which five Bf 109s, one Ju 88, and one He 111 were downed. These statistics reveal a surprising fact: it turns out that the Tomahawk was fully capable of successful air combat with a Bf 109. The reports of pilots about the circumstances of the engagements confirm this fact. On 18 January 1942, Lieutenants S. V. Levin and I. P. Levsha (in pair) fought an engagement with seven Bf 109s and shot down two of them without loss. On 22 January, a flight of three aircraft led by Lieutenant E. E. Lozov engaged 13 enemy aircraft and shot down two Bf 109Es, again without loss. Altogether, in January, two Tomahawks were lost; one downed by German anti-aircraft artillery and one lost to Messerschmitts.",
"title": "Operational history"
},
{
"paragraph_id": 70,
"text": "The Soviets stripped down their P-40s significantly for combat, in many cases removing the wing guns altogether in P-40B/C types, for example. Soviet Air Force reports state that they liked the range and fuel capacity of the P-40, which were superior to most of the Soviet fighters, though they still preferred the P-39. Soviet pilot Nikolai G. Golodnikov recalled: \"The cockpit was vast and high. At first it felt unpleasant to sit waist-high in glass, as the edge of the fuselage was almost at waist level. But the bullet-proof glass and armored seat were strong and visibility was good. The radio was also good. It was powerful, reliable, but only on HF (high frequency). The American radios did not have hand microphones but throat microphones. These were good throat mikes: small, light and comfortable.\" The biggest complaint of some Soviet airmen was its poor climb rate and problems with maintenance, especially with burning out the engines. VVS pilots usually flew the P-40 at War Emergency Power settings while in combat, which brought acceleration and speed performance closer to that of their German rivals, but could burn out engines in a matter of weeks. Tires and batteries also failed. The fluid in the engine's radiators often froze, cracking their cores, which made the Allison engine unsuitable for operations during harsh winter conditions. During the winter of 1941, the 126th Fighter Aviation Regiment suffered from cracked radiators on 38 occasions. Often, entire regiments were reduced to a single flyable aircraft because no replacement parts were available. They also had difficulty with the more demanding requirements for fuel and oil quality of the Allison engines. A fair number of burned-out P-40s were re-engined with Soviet Klimov M-105 engines, but these performed relatively poorly and were relegated to rear area use.",
"title": "Operational history"
},
{
"paragraph_id": 71,
"text": "Actually, the P-40 could engage all Messerschmitts on equal terms, almost to the end of 1943. If you take into consideration all the characteristics of the P-40, then the Tomahawk was equal to the Bf 109F and the Kittyhawk was slightly better. Its speed and vertical and horizontal manoeuvre were good and fully competitive with enemy aircraft. Acceleration rate was a bit low, but when you got used to the engine, it was OK. We considered the P-40 a decent fighter plane.",
"title": "Operational history"
},
{
"paragraph_id": 72,
"text": "The P-40 saw the most front line use in Soviet hands in 1942 and early 1943. Deliveries over the Alaska-Siberia ALSIB ferry route began in October 1942. It was used in the northern sectors and played a significant role in the defense of Leningrad. The most numerically important types were P-40B/C, P-40E and P-40K/M. By the time the better P-40F and N types became available, production of superior Soviet fighters had increased sufficiently so that the P-40 was replaced in most Soviet Air Force units by the Lavochkin La-5 and various later Yakovlev types. In spring 1943, Lt D.I. Koval of the 45th IAP gained ace status on the North Caucasian front, shooting down six German aircraft flying a P-40. Some Soviet P-40 squadrons had good combat records. Some Soviet pilots became aces on the P-40, though not as many as on the P-39 Airacobra, the most numerous Lend-Lease fighter used by the Soviet Union. However, Soviet commanders thought the Kittyhawk significantly outclassed the Hurricane, although it was \"not in the same league as the Yak-1\".",
"title": "Operational history"
},
{
"paragraph_id": 73,
"text": "The Japanese Army captured some P-40s and later operated a number in Burma. The Japanese appear to have had as many as 10 flyable P-40Es. For a brief period in 1943, a few of them were used operationally by 2 Hiko Chutai, 50 Hiko Sentai (2nd Air Squadron, 50th Air Regiment) in the defense of Rangoon. Testimony of this is given by Yasuhiko Kuroe, a member of the 64 Hiko Sentai. In his memoirs, he says one Japanese-operated P-40 was shot down in error by a friendly Mitsubishi Ki-21 \"Sally\" over Rangoon.",
"title": "Operational history"
},
{
"paragraph_id": 74,
"text": "The P-40 was used by over two dozen countries during and after the war. The P-40 was used by Brazil, Egypt, Finland and Turkey. The last P-40s in military service, used by the Brazilian Air Force (FAB), were retired in 1954.",
"title": "Operational history"
},
{
"paragraph_id": 75,
"text": "In the air war over Finland, several Soviet P-40s were shot down or had to crash-land due to other reasons. The Finns, short of good aircraft, collected these and managed to repair one P-40M, P-40M-10-CU 43–5925, white 23, which received Finnish Air Force serial number KH-51 (KH denoting \"Kittyhawk\", as the British designation of this type was Kittyhawk III). This aircraft was attached to an operational squadron HLeLv 32 of the Finnish Air Force, but lack of spares kept it on the ground, with the exception of a few evaluation flights.",
"title": "Operational history"
},
{
"paragraph_id": 76,
"text": "Several P-40Ns were used by the Royal Netherlands East Indies Army Air Force with No. 120 (Netherlands East Indies) Squadron RAAF against the Japanese before being used during the fighting in Indonesia until February 1949.",
"title": "Operational history"
},
{
"paragraph_id": 77,
"text": "This new liquid-cooled engine fighter had a radiator mounted under the rear fuselage but the prototype XP-40 was later modified and the radiator was moved forward under the engine.",
"title": "Variants and development stages"
},
{
"paragraph_id": 78,
"text": "On 11 May 2012, the remains of a crashed P-40 Kittyhawk (ET574) that had run out of fuel was found in the Egyptian Sahara desert. No trace of the pilot has been found to date. Due to the extreme arid conditions, little corrosion of the metal surfaces occurred. The conditions in which it was found are similar to those preferred for aircraft boneyards. An attempt has been made to bring back the Kittyhawk to Great Britain with the RAF Museum paying a salvage team with Supermarine Spitfire PK664 to recover the Kittyhawk. This turned out to be unsuccessful as the Kittyhawk is now being displayed outside at a military museum at El Alamein, having received a poor quality restoration, and PK664 being reported lost.",
"title": "Surviving aircraft"
},
{
"paragraph_id": 79,
"text": "Of the 13,738 P-40s built, only 28 remain airworthy, with three of them being converted to dual-controls/dual-seat configuration. Approximately 13 aircraft are on static display and another 36 airframes are under restoration for either display or flight.",
"title": "Surviving aircraft"
},
{
"paragraph_id": 80,
"text": "Data from Curtiss Aircraft 1907–1947, America's hundred thousand : the U.S. production fighter aircraft of World War II",
"title": "Specifications (P-40E)"
},
{
"paragraph_id": 81,
"text": "General characteristics",
"title": "Specifications (P-40E)"
},
{
"paragraph_id": 82,
"text": "Performance",
"title": "Specifications (P-40E)"
},
{
"paragraph_id": 83,
"text": "Armament",
"title": "Specifications (P-40E)"
},
{
"paragraph_id": 84,
"text": "Related development",
"title": "See also"
},
{
"paragraph_id": 85,
"text": "Aircraft of comparable role, configuration, and era",
"title": "See also"
},
{
"paragraph_id": 86,
"text": "Related lists",
"title": "See also"
}
] | The Curtiss P-40 Warhawk is an American single-engined, single-seat, all-metal fighter-bomber that first flew in 1938. The P-40 design was a modification of the previous Curtiss P-36 Hawk which reduced development time and enabled a rapid entry into production and operational service. The Warhawk was used by most Allied powers during World War II, and remained in frontline service until the end of the war. It was the third most-produced American fighter of World War II, after the P-51 and P-47; by November 1944, when production of the P-40 ceased, 13,738 had been built, all at Curtiss-Wright Corporation's main production facilities in Buffalo, New York. P-40 Warhawk was the name the United States Army Air Corps gave the plane, and after June 1941, the USAAF
adopted the name for all models, making it the official name in the U.S. for all P-40s. The British Commonwealth and Soviet air forces used the name Tomahawk for models equivalent to the original P-40, P-40B, and P-40C, and the name Kittyhawk for models equivalent to the P-40D and all later variants. P-40s first saw combat with the British Commonwealth squadrons of the Desert Air Force in the Middle East and North African campaigns, during June 1941. No. 112 Squadron Royal Air Force, was among the first to operate Tomahawks in North Africa and the unit was the first Allied military aviation unit to feature the "shark mouth" logo, copying similar markings on some Luftwaffe Messerschmitt Bf 110 twin-engine fighters. The P-40's liquid-cooled, supercharged Allison V-1710 V-12 engine's lack of a two-speed supercharger made it inferior to Luftwaffe fighters such as the Messerschmitt Bf 109 or the Focke-Wulf Fw 190 in high-altitude combat and it was rarely used in operations in Northwest Europe. However, between 1941 and 1944, the P-40 played a critical role with Allied air forces in three major theaters: North Africa, the Southwest Pacific, and China. It also had a significant role in the Middle East, Southeast Asia, Eastern Europe, Alaska and Italy. The P-40's performance at high altitudes was not as important in those theaters, where it served as an air superiority fighter, bomber escort and fighter-bomber. Although it gained a postwar reputation as a mediocre design, suitable only for close air support, more recent research including scrutiny of the records of Allied squadrons indicates that this was not the case; the P-40 performed surprisingly well as an air superiority fighter, at times suffering severe losses, but also inflicting a very heavy toll on enemy aircraft. Based on war-time victory claims, over 200 Allied fighter pilots – from the UK, Australia, New Zealand, Canada, South Africa, the US and the Soviet Union – became aces flying the P-40. These included at least 20 double aces, mostly over North Africa, China, Burma and India, the South West Pacific and Eastern Europe. The P-40 offered the additional advantages of low cost and durability, which kept it in production as a ground-attack aircraft long after it was obsolescent as a fighter. | 2001-11-22T12:26:03Z | 2023-12-08T14:05:15Z | [
"Template:Cbignore",
"Template:Short description",
"Template:Circa",
"Template:Flag",
"Template:FIN",
"Template:Clear",
"Template:Portal",
"Template:Webarchive",
"Template:Authority control",
"Template:Refn",
"Template:USSR",
"Template:ISBN",
"Template:Cite news",
"Template:P-40 Warhawk family",
"Template:Convert",
"Template:NZL",
"Template:UK",
"Template:Aircraft specs",
"Template:Cite magazine",
"Template:Refend",
"Template:Curtiss aircraft",
"Template:Infobox aircraft type",
"Template:NLD",
"Template:POL",
"Template:TUR",
"Template:Dead link",
"Template:Use dmy dates",
"Template:Infobox aircraft begin",
"Template:Main",
"Template:Cite book",
"Template:Cite journal",
"Template:Refbegin",
"Template:YouTube",
"Template:USAF fighters",
"Template:ADF aircraft designations",
"Template:Clarify",
"Template:Blockquote",
"Template:Cvt",
"Template:See also",
"Template:Commons category",
"Template:Redirect",
"Template:Frac",
"Template:USS",
"Template:AUS",
"Template:Aircontent",
"Template:Reflist",
"Template:Tuskegee Airmen",
"Template:More citations needed section",
"Template:Page needed",
"Template:Cite web"
] | https://en.wikipedia.org/wiki/Curtiss_P-40_Warhawk |
7,212 | Creed | A creed, also known as a confession of faith, a symbol, or a statement of faith, is a statement of the shared beliefs of a community (often a religious community) in a form which is structured by subjects which summarize its core tenets.
The earliest known creed in Christianity, "Jesus is Lord", originated in the writings of Paul the Apostle. One of the most significant and widely used Christian creeds is the Nicene Creed, first formulated in AD 325 at the First Council of Nicaea to affirm the deity of Christ and revised at the First Council of Constantinople in AD 381 to affirm the trinity as a whole. The creed was further affirmed in 451 by the Chalcedonian Definition, which clarified the doctrine of Christ. Affirmation of this creed, which describes the Trinity, is often taken as a fundamental test of orthodoxy by many Christian denominations, and was historically purposed against Arianism. The Apostles' Creed, another early creed which concisely details the trinity, virgin birth, crucifixion, and resurrection, is most popular within western Christianity, and is widely used in Christian church services.
Some Christian denominations do not use any of those creeds.
In Islamic theology, the term most closely corresponding to "creed" is ʿaqīdah (عقيدة).
The word creed is particularly used for a concise statement which is recited as part of liturgy. The term is anglicized from Latin credo "I believe", the incipit of the Latin texts of the Apostles' Creed and the Nicene Creed. A creed is sometimes referred to as a symbol in a specialized meaning of that word (which was first introduced to Late Middle English in this sense), after Latin symbolum "creed" (as in Symbolum Apostolorum = the "Apostles' Creed", a shorter version of the traditional Nicene Creed), after Greek symbolon "token, watchword".
Some longer statements of faith in the Protestant tradition are instead called "confessions of faith", or simply "confession" (as in e.g. Helvetic Confession). Within Evangelical Protestantism, the terms "doctrinal statement" or "doctrinal basis" tend to be preferred. Doctrinal statements may include positions on lectionary and translations of the Bible, particularly in fundamentalist churches of the King James Only movement.
The term creed is sometimes extended to comparable concepts in non-Christian theologies; thus the Islamic concept of ʿaqīdah (literally "bond, tie") is often rendered as "creed".
The first confession of faith established within Christianity was the Nicene Creed, adopted by the Early Church in 325. It was established to summarize the foundations of the Christian faith and to protect believers from false doctrines. Various Christian denominations from Protestantism and Evangelical Christianity have published confessions of faith as a basis for fellowship among churches of the same denomination.
Many Christian denominations did not try to be too exhaustive in their confessions of faith and thus allow different opinions on some secondary topics. In addition, some churches are open to revising their confession of faith when necessary. Moreover, Baptist "confessions of faith" have often had a clause such as this from the First London Baptist Confession (Revised edition, 1646):
Also we confess that we now know but in part and that are ignorant of many things which we desire to and seek to know: and if any shall do us that friendly part to show us from the Word of God that we see not, we shall have cause to be thankful to God and to them.
Excommunication is a biblical practice of excluding members who do not respect the Church's confession of faith and do not want to repent. It is practiced by all Christian denominations and is intended to protect against the consequences of heretics' teachings and apostasy.
Some Christian denominations do not profess a creed. This stance is often referred to as "non-creedalism".
Anabaptism, with its origins in the 16th century Radical Reformation, spawned a number of sects and denominations that espouse "No creed, but the Bible/New Testament". This was a common reason for Anabaptist persecution from Catholic and Protestant believers. Anabaptist groups that exist today include the Amish, Hutterites, Mennonites, Schwarzenau Brethren (Church of the Brethren), River Brethren, Bruderhof, and the Apostolic Christian Church. The Seventh-day Adventist Church also shares this sentiment.
The Religious Society of Friends, the group known as the Quakers, was founded in the 17th century and is similarly non-creedal. They believe that such formal structures, “be they written words, steeple-houses or a clerical hierarchy,” cannot take the place of communal relationships and a shared connection with God.
Similar reservations about the use of creeds can be found in the Restoration Movement and its descendants, the Christian Church (Disciples of Christ), the Churches of Christ, and the Christian churches and churches of Christ. Restorationists profess "no creed but Christ".
Jehovah's Witnesses contrast "memorizing or repeating creeds" with acting to "do what Jesus said".
Several creeds originated in Christianity.
Protestant denominations are usually associated with confessions of faith, which are similar to creeds but usually longer.
Within the sects of the Latter Day Saint movement, the Articles of Faith are contained in a list which was composed by Joseph Smith as part of an 1842 letter which he sent to "Long" John Wentworth, editor of the Chicago Democrat. It is canonized along with the King James Version of the Bible, the Book of Mormon, the Doctrine & Covenants and the Pearl of Great Price, as a part of the standard works of the Church of Jesus Christ of Latter-day Saints.
In the Swiss Reformed Churches, there was a quarrel about the Apostles' Creed in the mid-19th century. As a result, most cantonal reformed churches stopped prescribing any particular creed.
In 2005, Bishop John Shelby Spong, retired Episcopal Bishop of Newark, wrote that dogmas and creeds were merely "a stage in our development" and "part of our religious childhood." In his book, Sins of the Scripture, Spong wrote that "Jesus seemed to understand that no one can finally fit the holy God into his or her creeds or doctrines. That is idolatry."
In Islamic theology, the term most closely corresponding to "creed" is ʿaqīdah (عقيدة). The first such creed, written as "a short answer to the pressing heresies of the time", is known as Al-Fiqh Al-Akbar and is ascribed to Abū Ḥanīfa. Two well-known creeds were the Fiqh Akbar II, "representative" of the al-Ash'ari, and the Fiqh Akbar III, "representative" of the Ash-Shafi'i.
Iman (Arabic: الإيمان) in Islamic theology denotes a believer's religious faith. Its most simple definition is the belief in the six articles of faith, known as arkān al-īmān.
Rabbi Milton Steinberg wrote that "By its nature Judaism is averse to formal creeds which of necessity limit and restrain thought" and asserted in his book Basic Judaism (1947) that "Judaism has never arrived at a creed." The 1976 Centenary Platform of the Central Conference of American Rabbis, an organization of Reform rabbis, agrees that "Judaism emphasizes action rather than creed as the primary expression of a religious life."
Some characterize the Shema Yisrael as a creedal statement in strict monotheism embodied by prayer: "Hear O Israel, the Lord is our God, the Lord is One" (Hebrew: שמע ישראל אדני אלהינו אדני אחד; transliterated Shema Yisrael Adonai Eloheinu Adonai Echad).
A notable statement of Jewish principles of faith was drawn up by Maimonides as his 13 Principles of Faith.
Following a debate that lasted more than twenty years, the National Conference of the American Unitarian Association passed a resolution in 1894 that established the denomination as non-creedal. The Unitarians later merged with the Universalist Church of America to form the Unitarian Universalist Association (UUA). Instead of a creed, the UUA abides by a set of principles, such as “a free and responsible search for truth and meaning”. It cites diverse sources of inspiration, including Christianity, Judaism, Humanism, and Earth-centered traditions. | [
{
"paragraph_id": 0,
"text": "A creed, also known as a confession of faith, a symbol, or a statement of faith, is a statement of the shared beliefs of a community (often a religious community) in a form which is structured by subjects which summarize its core tenets.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The earliest known creed in Christianity, \"Jesus is Lord\", originated in the writings of Paul the Apostle. One of the most significant and widely used Christian creeds is the Nicene Creed, first formulated in AD 325 at the First Council of Nicaea to affirm the deity of Christ and revised at the First Council of Constantinople in AD 381 to affirm the trinity as a whole. The creed was further affirmed in 431 by the Chalcedonian Definition, which clarified the doctrine of Christ. Affirmation of this creed, which describes the Trinity, is often taken as a fundamental test of orthodoxy by many Christian denominations, and was historically purposed against Arianism. The Apostles Creed, another early creed which concisely details the trinity, virgin birth, crucifixion, and resurrection, is most popular within western Christianity, and is widely used in Christian church services.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Some Christian denominations do not use any of those creeds.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "In Islamic theology, the term most closely corresponding to \"creed\" is ʿaqīdah (عقيدة).",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The word creed is particularly used for a concise statement which is recited as part of liturgy. The term is anglicized from Latin credo \"I believe\", the incipit of the Latin texts of the Apostles' Creed and the Nicene Creed. A creed is sometimes referred to as a symbol in a specialized meaning of that word (which was first introduced to Late Middle English in this sense), after Latin symbolum \"creed\" (as in Symbolum Apostolorum = the \"Apostles' Creed\", a shorter version of the traditional Nicene Creed), after Greek symbolon \"token, watchword\".",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "Some longer statements of faith in the Protestant tradition are instead called \"confessions of faith\", or simply \"confession\" (as in e.g. Helvetic Confession). Within Evangelical Protestantism, the terms \"doctrinal statement\" or \"doctrinal basis\" tend to be preferred. Doctrinal statements may include positions on lectionary and translations of the Bible, particularly in fundamentalist churches of the King James Only movement.",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "The term creed is sometimes extended to comparable concepts in non-Christian theologies; thus the Islamic concept of ʿaqīdah (literally \"bond, tie\") is often rendered as \"creed\".",
"title": "Terminology"
},
{
"paragraph_id": 7,
"text": "The first confession of faith established within Christianity was the Nicene Creed by the Early Church in 325. It was established to summarize the foundations of the Christian faith and to protect believers from false doctrines. Various Christian denominations from Protestantism and Evangelical Christianity have published confession of faith as a basis for fellowship among churches of the same denomination.",
"title": "Christianity"
},
{
"paragraph_id": 8,
"text": "Many Christian denominations did not try to be too exhaustive in their confessions of faith and thus allow different opinions on some secondary topics. In addition, some churches are open to revising their confession of faith when necessary. Moreover, Baptist \"confessions of faith\" have often had a clause such as this from the First London Baptist Confession (Revised edition, 1646):",
"title": "Christianity"
},
{
"paragraph_id": 9,
"text": "Also we confess that we now know but in part and that are ignorant of many things which we desire to and seek to know: and if any shall do us that friendly part to show us from the Word of God that we see not, we shall have cause to be thankful to God and to them.",
"title": "Christianity"
},
{
"paragraph_id": 10,
"text": "Excommunication is a practice of the Bible to exclude members who do not respect the Church's confession of faith and do not want to repent. It is practiced by all Christian denominations and is intended to protect against the consequences of heretics' teachings and apostasy.",
"title": "Christianity"
},
{
"paragraph_id": 11,
"text": "Some Christian denominations do not profess a creed. This stance is often referred to as \"non-creedalism\".",
"title": "Christianity"
},
{
"paragraph_id": 12,
"text": "Anabaptism, with its origins in the 16th century Radical Reformation, spawned a number of sects and denominations that espouse \"No creed, but the Bible/New Testament\". This was a common reason for Anabaptist persecution from Catholic and Protestant believers. Anabaptist groups that exist today include the Amish, Hutterites, Mennonites, Schwarzenau Brethren (Church of the Brethren), River Brethren, Bruderhof, and the Apostolic Christian Church. The Seventh-day Adventist Church also shares this sentiment.",
"title": "Christianity"
},
{
"paragraph_id": 13,
"text": "The Religious Society of Friends, the group known as the Quakers, was founded in the 17th century and is similarly non-creedal. They believe that such formal structures, “be they written words, steeple-houses or a clerical hierarchy,” cannot take the place of communal relationships and a shared connection with God.",
"title": "Christianity"
},
{
"paragraph_id": 14,
"text": "Similar reservations about the use of creeds can be found in the Restoration Movement and its descendants, the Christian Church (Disciples of Christ), the Churches of Christ, and the Christian churches and churches of Christ. Restorationists profess \"no creed but Christ\".",
"title": "Christianity"
},
{
"paragraph_id": 15,
"text": "Jehovah's Witnesses contrast \"memorizing or repeating creeds\" with acting to \"do what Jesus said\".",
"title": "Christianity"
},
{
"paragraph_id": 16,
"text": "Several creeds originated in Christianity.",
"title": "Christianity"
},
{
"paragraph_id": 17,
"text": "Protestant denominations are usually associated with confessions of faith, which are similar to creeds but usually longer.",
"title": "Christianity"
},
{
"paragraph_id": 18,
"text": "Within the sects of the Latter Day Saint movement, the Articles of Faith are contained in a list which was composed by Joseph Smith as part of an 1842 letter which he sent to \"Long\" John Wentworth, editor of the Chicago Democrat. It is canonized along with the King James Version of the Bible, the Book of Mormon, the Doctrine & Covenants and the Pearl of Great Price, as a part of the standard works of the Church of Jesus Christ of Latter-day Saints.",
"title": "Christianity"
},
{
"paragraph_id": 19,
"text": "In the Swiss Reformed Churches, there was a quarrel about the Apostles' Creed in the mid-19th century. As a result, most cantonal reformed churches stopped prescribing any particular creed.",
"title": "Christianity"
},
{
"paragraph_id": 20,
"text": "In 2005, Bishop John Shelby Spong, retired Episcopal Bishop of Newark, has written that dogmas and creeds were merely \"a stage in our development\" and \"part of our religious childhood.\" In his book, Sins of the Scripture, Spong wrote that \"Jesus seemed to understand that no one can finally fit the holy God into his or her creeds or doctrines. That is idolatry.\"",
"title": "Christianity"
},
{
"paragraph_id": 21,
"text": "In Islamic theology, the term most closely corresponding to \"creed\" is ʿaqīdah (عقيدة). The first such creed was written as \"a short answer to the pressing heresies of the time\" is known as Al-Fiqh Al-Akbar and ascribed to Abū Ḥanīfa. Two well known creeds were the Fiqh Akbar II \"representative\" of the al-Ash'ari, and Fiqh Akbar III, \"representative\" of the Ash-Shafi'i.",
"title": "Similar concepts in other religions"
},
{
"paragraph_id": 22,
"text": "Iman (Arabic: الإيمان) in Islamic theology denotes a believer's religious faith. Its most simple definition is the belief in the six articles of faith, known as arkān al-īmān.",
"title": "Similar concepts in other religions"
},
{
"paragraph_id": 23,
"text": "Rabbi Milton Steinberg wrote that \"By its nature Judaism is averse to formal creeds which of necessity limit and restrain thought\" and asserted in his book Basic Judaism (1947) that \"Judaism has never arrived at a creed.\" The 1976 Centenary Platform of the Central Conference of American Rabbis, an organization of Reform rabbis, agrees that \"Judaism emphasizes action rather than creed as the primary expression of a religious life.\"",
"title": "Similar concepts in other religions"
},
{
"paragraph_id": 24,
"text": "Some characterize the Shema Yisrael as a creedal statement in strict monotheism embodied by prayer: \"Hear O Israel, the Lord is our God, the Lord is One\" (Hebrew: שמע ישראל אדני אלהינו אדני אחד; transliterated Shema Yisrael Adonai Eloheinu Adonai Echad).",
"title": "Similar concepts in other religions"
},
{
"paragraph_id": 25,
"text": "A notable statement of Jewish principles of faith was drawn up by Maimonides as his 13 Principles of Faith.",
"title": "Similar concepts in other religions"
},
{
"paragraph_id": 26,
"text": "Following a debate that lasted more than twenty years, the National Conference of the American Unitarian Association passed a resolution in 1894 that established the denomination as non-creedal. The Unitarians later merged with the Universalist Church of America to form the Unitarian Universalist Association (UUA). Instead of a creed, the UUA abides by a set of principles, such as “a free and responsible search for truth and meaning”. It cites diverse sources of inspiration, including Christianity, Judaism, Humanism, and Earth-centered traditions.",
"title": "Religions without creeds"
}
] | A creed, also known as a confession of faith, a symbol, or a statement of faith, is a statement of the shared beliefs of a community in a form which is structured by subjects which summarize its core tenets. | 2001-11-22T16:07:14Z | 2023-11-14T21:41:22Z | [
"Template:Webarchive",
"Template:HDS",
"Template:Short description",
"Template:About",
"Template:ISBN",
"Template:Wikiquote",
"Template:Christianity footer",
"Template:Redirect-distinguish",
"Template:See also",
"Template:Lang-ar",
"Template:Lang-he",
"Template:Cite web",
"Template:Blockquote",
"Template:Main",
"Template:Fact",
"Template:Who",
"Template:Christianity",
"Template:Reflist",
"Template:Cite book",
"Template:EB1911 poster",
"Template:Redirect",
"Template:Lang",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Creed |
7,213 | Claudius Aelianus | Claudius Aelianus (Ancient Greek: Κλαύδιος Αἰλιανός, Greek transliteration Kláudios Ailianós; c. 175 – c. 235 AD), commonly Aelian (/ˈiːliən/), born at Praeneste, was a Roman author and teacher of rhetoric who flourished under Septimius Severus and probably outlived Elagabalus, who died in 222. He spoke Greek so fluently that he was called "honey-tongued" (μελίγλωσσος meliglossos); Roman-born, he preferred Greek authors, and wrote in a slightly archaizing Greek himself.
His two chief works are valuable for the numerous quotations from the works of earlier authors, which are otherwise lost, and for the surprising lore, which offers unexpected glimpses into the Greco-Roman world-view. It is also the only Greco-Roman work to mention Gilgamesh.
On the Nature of Animals (alternatively "On the Characteristics of Animals"; Ancient Greek: Περὶ ζῴων ἰδιότητος, Perì zṓōn idiótētos; usually cited by its Latin title De Natura Animalium) is a collection, in seventeen books, of brief stories of natural history. Some are included for the moral lessons they convey; others because they are astonishing.
The Beaver is an amphibious creature: by day it lives hidden in rivers, but at night it roams the land, feeding itself with anything that it can find. Now it understands the reason why hunters come after it with such eagerness and impetuosity, and it puts down its head and with its teeth cuts off its testicles and throws them in their path, as a prudent man who, falling into the hands of robbers, sacrifices all that he is carrying, to save his life, and forfeits his possessions by way of ransom. If however it has already saved its life by self-castration and is again pursued, then it stands up and reveals that it offers no ground for their eager pursuit, and releases the hunters from all further exertions, for they esteem its flesh less. Often however Beavers with testicles intact, after escaping as far away as possible, have drawn in the coveted part, and with great skill and ingenuity tricked their pursuers, pretending that they no longer possessed what they were keeping in concealment.
The Loeb Classical Library introduction characterizes the book as "an appealing collection of facts and fables about the animal kingdom that invites the reader to ponder contrasts between human and animal behavior".
Aelian's anecdotes on animals rarely depend on direct observation: they are almost entirely taken from written sources, not only Pliny the Elder, Theopompus, and Lycus of Rhegium, but also other authors and works now lost, to whom he is thus a valuable witness. He is more attentive to marine life than might be expected, though, and this seems to reflect first-hand personal interest; he often quotes "fishermen". At times he strikes the modern reader as thoroughly credulous, but at others he specifically states that he is merely reporting what is told by others, and even that he does not believe them. Aelian's work is one of the sources of medieval natural history and of the bestiaries of the Middle Ages.
The surviving portions of the text are badly mangled and garbled and replete with later interpolations. Conrad Gessner (or Gesner), the Swiss scientist and natural historian of the Renaissance, made a Latin translation of Aelian's work, to give it a wider European audience. An English translation by A. F. Scholfield has been published in the Loeb Classical Library, 3 vols. (1958-59).
Various History (Ποικίλη ἱστορία, Poikílē historía)—for the most part preserved only in an abridged form—is Aelian's other well-known work, a miscellany of anecdotes and biographical sketches, lists, pithy maxims, and descriptions of natural wonders and strange local customs, in 14 books, with many surprises for the cultural historian and the mythographer, anecdotes about the famous Greek philosophers, poets, historians, and playwrights and myths instructively retold. The emphasis is on various moralizing tales about heroes and rulers, athletes and wise men; reports about food and drink, different styles in dress or lovers, local habits in giving gifts or entertainments, or in religious beliefs and death customs; and comments on Greek painting. Aelian gives accounts of, among other things, fly fishing using lures of red wool and feathers, lacquerwork, and serpent worship. Essentially, the Various History is a classical "magazine" in the original sense of that word. He is not perfectly trustworthy in details, and his writing was heavily influenced by Stoic opinions, perhaps so that his readers will not feel guilty, but Jane Ellen Harrison found survivals of archaic rites mentioned by Aelian very illuminating in her Prolegomena to the Study of Greek Religion (1903, 1922).
Varia Historia was first printed in 1545. The standard modern text is that of Mervin R. Dilts (1974).
Two English translations of the Various History, by Fleming (1576) and Stanley (1665) made Aelian's miscellany available to English readers, but after 1665 no English translation appeared, until three English translations appeared almost simultaneously: James G. DeVoto, Claudius Aelianus: Ποικίλης Ἱστορίας (Varia Historia) Chicago, 1995; Diane Ostrom Johnson, An English Translation of Claudius Aelianus' "Varia Historia", 1997; and N. G. Wilson, Aelian: Historical Miscellany in the Loeb Classical Library.
Considerable fragments of two other works, On Providence and Divine Manifestations, are preserved in the early medieval encyclopedia, the Suda. Twenty "letters from a farmer" after the manner of Alciphron are also attributed to him. The letters are invented compositions to a fictitious correspondent, which are a device for vignettes of agricultural and rural life, set in Attica, though mellifluous Aelian once boasted that he had never been outside Italy, never been aboard a ship (which is at variance, though, with his own statement, de Natura Animalium XI.40, that he had seen the bull Serapis with his own eyes). Thus conclusions about actual agriculture in the Letters are as likely to evoke Latium as Attica. The fragments were edited in 1998 by D. Domingo-Foraste, but are not available in English. The Letters are available in the Loeb Classical Library, translated by Allen Rogers Benner and Francis H. Fobes (1949).
{
"paragraph_id": 0,
"text": "Claudius Aelianus (Ancient Greek: Κλαύδιος Αἰλιανός, Greek transliteration Kláudios Ailianós; c. 175 – c. 235 AD), commonly Aelian (/ˈiːliən/), born at Praeneste, was a Roman author and teacher of rhetoric who flourished under Septimius Severus and probably outlived Elagabalus, who died in 222. He spoke Greek so fluently that he was called \"honey-tongued\" (μελίγλωσσος meliglossos); Roman-born, he preferred Greek authors, and wrote in a slightly archaizing Greek himself.",
"title": ""
},
{
"paragraph_id": 1,
"text": "His two chief works are valuable for the numerous quotations from the works of earlier authors, which are otherwise lost, and for the surprising lore, which offers unexpected glimpses into the Greco-Roman world-view. It is also the only Greco-Roman work to mention Gilgamesh.",
"title": ""
},
{
"paragraph_id": 2,
"text": "On the Nature of Animals (alternatively \"On the Characteristics of Animals\"; Ancient Greek: Περὶ ζῴων ἰδιότητος, Perì zṓōn idiótētos; usually cited by its Latin title De Natura Animalium) is a collection, in seventeen books, of brief stories of natural history. Some are included for the moral lessons they convey; others because they are astonishing.",
"title": "De Natura Animalium"
},
{
"paragraph_id": 3,
"text": "The Beaver is an amphibious creature: by day it lives hidden in rivers, but at night it roams the land, feeding itself with anything that it can find. Now it understands the reason why hunters come after it with such eagerness and impetuosity, and it puts down its head and with its teeth cuts off its testicles and throws them in their path, as a prudent man who, falling into the hands of robbers, sacrifices all that he is carrying, to save his life, and forfeits his possessions by way of ransom. If however it has already saved its life by self-castration and is again pursued, then it stands up and reveals that it offers no ground for their eager pursuit, and releases the hunters from all further exertions, for they esteem its flesh less. Often however Beavers with testicles intact, after escaping as far away as possible, have drawn in the coveted part, and with great skill and ingenuity tricked their pursuers, pretending that they no longer possessed what they were keeping in concealment.",
"title": "De Natura Animalium"
},
{
"paragraph_id": 4,
"text": "The Loeb Classical Library introduction characterizes the book as \"an appealing collection of facts and fables about the animal kingdom that invites the reader to ponder contrasts between human and animal behavior\".",
"title": "De Natura Animalium"
},
{
"paragraph_id": 5,
"text": "Aelian's anecdotes on animals rarely depend on direct observation: they are almost entirely taken from written sources, not only Pliny the Elder, Theopompus, and Lycus of Rhegium, but also other authors and works now lost, to whom he is thus a valuable witness. He is more attentive to marine life than might be expected, though, and this seems to reflect first-hand personal interest; he often quotes \"fishermen\". At times he strikes the modern reader as thoroughly credulous, but at others he specifically states that he is merely reporting what is told by others, and even that he does not believe them. Aelian's work is one of the sources of medieval natural history and of the bestiaries of the Middle Ages.",
"title": "De Natura Animalium"
},
{
"paragraph_id": 6,
"text": "The surviving portions of the text are badly mangled and garbled and replete with later interpolations. Conrad Gessner (or Gesner), the Swiss scientist and natural historian of the Renaissance, made a Latin translation of Aelian's work, to give it a wider European audience. An English translation by A. F. Scholfield has been published in the Loeb Classical Library, 3 vols. (1958-59).",
"title": "De Natura Animalium"
},
{
"paragraph_id": 7,
"text": "Various History (Ποικίλη ἱστορία, Poikílē historía)—for the most part preserved only in an abridged form—is Aelian's other well-known work, a miscellany of anecdotes and biographical sketches, lists, pithy maxims, and descriptions of natural wonders and strange local customs, in 14 books, with many surprises for the cultural historian and the mythographer, anecdotes about the famous Greek philosophers, poets, historians, and playwrights and myths instructively retold. The emphasis is on various moralizing tales about heroes and rulers, athletes and wise men; reports about food and drink, different styles in dress or lovers, local habits in giving gifts or entertainments, or in religious beliefs and death customs; and comments on Greek painting. Aelian gives accounts of, among other things, fly fishing using lures of red wool and feathers, lacquerwork, and serpent worship. Essentially, the Various History is a classical \"magazine\" in the original sense of that word. He is not perfectly trustworthy in details, and his writing was heavily influenced by Stoic opinions, perhaps so that his readers will not feel guilty, but Jane Ellen Harrison found survivals of archaic rites mentioned by Aelian very illuminating in her Prolegomena to the Study of Greek Religion (1903, 1922).",
"title": "Varia Historia"
},
{
"paragraph_id": 8,
"text": "Varia Historia was first printed in 1545. The standard modern text is that of Mervin R. Dilts (1974).",
"title": "Varia Historia"
},
{
"paragraph_id": 9,
"text": "Two English translations of the Various History, by Fleming (1576) and Stanley (1665) made Aelian's miscellany available to English readers, but after 1665 no English translation appeared, until three English translations appeared almost simultaneously: James G. DeVoto, Claudius Aelianus: Ποικίλης Ἱστορίας (Varia Historia) Chicago, 1995; Diane Ostrom Johnson, An English Translation of Claudius Aelianus' \"Varia Historia\", 1997; and N. G. Wilson, Aelian: Historical Miscellany in the Loeb Classical Library.",
"title": "Varia Historia"
},
{
"paragraph_id": 10,
"text": "Considerable fragments of two other works, On Providence and Divine Manifestations, are preserved in the early medieval encyclopedia, the Suda. Twenty \"letters from a farmer\" after the manner of Alciphron are also attributed to him. The letters are invented compositions to a fictitious correspondent, which are a device for vignettes of agricultural and rural life, set in Attica, though mellifluous Aelian once boasted that he had never been outside Italy, never been aboard a ship (which is at variance, though, with his own statement, de Natura Animalium XI.40, that he had seen the bull Serapis with his own eyes). Thus conclusions about actual agriculture in the Letters are as likely to evoke Latium as Attica. The fragments have been edited in 1998 by D. Domingo-Foraste, but are not available in English. The Letters are available in the Loeb Classical Library, translated by Allen Rogers Benner and Francis H. Fobes (1949).",
"title": "Other works"
}
] | Claudius Aelianus, commonly Aelian, born at Praeneste, was a Roman author and teacher of rhetoric who flourished under Septimius Severus and probably outlived Elagabalus, who died in 222. He spoke Greek so fluently that he was called "honey-tongued"; Roman-born, he preferred Greek authors, and wrote in a slightly archaizing Greek himself. His two chief works are valuable for the numerous quotations from the works of earlier authors, which are otherwise lost, and for the surprising lore, which offers unexpected glimpses into the Greco-Roman world-view. It is also the only Greco-Roman work to mention Gilgamesh. | 2001-11-22T22:47:03Z | 2023-10-17T14:39:51Z | [
"Template:Blockquote",
"Template:Cite book",
"Template:Natural history",
"Template:Circa",
"Template:Citation needed",
"Template:Reflist",
"Template:EB1911",
"Template:Short description",
"Template:IPAc-en",
"Template:Lang",
"Template:Explain",
"Template:ISBN",
"Template:Wikisource author",
"Template:Lang-grc",
"Template:Internet Archive author",
"Template:Librivox author",
"Template:Authority control",
"Template:According to whom"
] | https://en.wikipedia.org/wiki/Claudius_Aelianus |
7,214 | Callisto (mythology) | In Greek mythology, Callisto (/kəˈlɪstoʊ/; Ancient Greek: Καλλιστώ Greek pronunciation: [kallistɔ̌ː]) was a nymph, or the daughter of King Lycaon; the myth varies in such details. She was believed to be one of the followers of Artemis (Diana for the Romans) who attracted Zeus. Many versions of Callisto's story survive. According to some writers, Zeus transformed himself into the figure of Artemis to pursue Callisto, and she slept with him believing Zeus to be Artemis. She became pregnant and when this was eventually discovered, she was expelled from Artemis's group, after which a furious Hera, the wife of Zeus, transformed her into a bear, although in some versions Artemis is the one to give her an ursine form. Later, just as she was about to be killed by her son when he was hunting, she was set among the stars as Ursa Major ("the Great Bear") by Zeus. She was the bear-mother of the Arcadians, through her son Arcas by Zeus.
The fourth Galilean moon of Jupiter and a main belt asteroid are named after Callisto.
As a follower of Artemis, Callisto, who Hesiod said was the daughter of Lycaon, king of Arcadia, took a vow to remain a virgin, as did all the nymphs of Artemis.
According to Hesiod, she was seduced by Zeus, and of the consequences that followed:
[Callisto] chose to occupy herself with wild-beasts in the mountains together with Artemis, and, when she was seduced by Zeus, continued some time undetected by the goddess, but afterwards, when she was already with child, was seen by her bathing and so discovered. Upon this, the goddess was enraged and changed her into a beast. Thus she became a bear and gave birth to a son called Arcas. But while she was in the mountains, she was hunted by some goat-herds and given up with her babe to Lycaon. Some while after, she thought fit to go into the forbidden precinct of Zeus, not knowing the law, and being pursued by her own son and the Arcadians, was about to be killed because of the said law; but Zeus delivered her because of her connection with him and put her among the stars, giving her the name Bear because of the misfortune which had befallen her.
Eratosthenes also mentions a variation in which the virginal companion of Artemis that was seduced by Zeus and eventually transformed into the constellation Ursa Minor was named Phoenice instead.
According to Ovid, it was Jupiter who took the form of Diana so that he might evade his wife Juno's detection, forcing himself upon Callisto while she was separated from Diana and the other nymphs. Callisto recognized that something was wrong the moment Jupiter started giving her "non-virginal kisses", but by that point it was too late, and even though she fought him off, he overpowered her. The real Diana arrived in the scene soon after and called Callisto to her, only for the girl to run away in fear she was Jupiter, until she noticed the nymphs accompanying the goddess. Callisto's subsequent pregnancy was discovered several months later while she was bathing with Diana and her fellow nymphs. Diana became enraged when she saw that Callisto was pregnant and expelled her from the group. Callisto later gave birth to Arcas. Juno then took the opportunity to avenge her wounded pride and transformed the nymph into a bear. Sixteen years later Callisto, still a bear, encountered her son Arcas hunting in the forest. Just as Arcas was about to kill his own mother with his javelin, Jupiter averted the tragedy by placing mother and son amongst the stars as Ursa Major and Minor, respectively. Juno, enraged that her attempt at revenge had been frustrated, appealed to Tethys that the two might never meet her waters, thus providing a poetic explanation for the constellations' circumpolar positions in ancient times.
According to Hyginus, the origin of the transformation of Zeus, with its lesbian overtones, was from a rendition of the tale in a comedy in a lost work by the Attic comedian Amphis where Zeus embraced Callisto as Artemis and she, after being questioned by Artemis for her pregnancy, blamed the goddess, thinking she had impregnated her; Artemis then changed her into a bear. She was caught by some Aetolians and brought to Lycaon, her father. Still a bear, she rushed with her son Arcas into a temple of Zeus as the Arcadians followed to kill them; Zeus turned mother and son into constellations. Hyginus also records a version where Hera changed Callisto for sleeping with Zeus, and Artemis later slew her while hunting, not recognizing her. In another of the versions Hyginus records, it was Zeus who turned Callisto into a bear, to conceal her from Juno, who had noticed what her husband was doing. Juno then pointed Callisto to Diana, who proceeded to shoot her with her arrows.
According to the mythographer Apollodorus, Zeus forced himself on Callisto when he disguised himself as Artemis or Apollo, in order to lure the sworn maiden into his embrace. Apollodorus is the only author to mention Apollo, but implies that it is not a rarity. Callisto was then turned into a bear by Zeus trying to hide her from Hera, but Hera asked Artemis to shoot the animal, and Artemis complied. Alternatively, Artemis killed Callisto for not protecting her virginity. Nonnus also writes that a "female paramour entered a woman's bed."
Either Artemis "slew Kallisto with a shot of her silver bow," according to Homer, in order to please Juno (Hera) as Pausanias and Pseudo-Apollodorus write or later Arcas, the eponym of Arcadia, nearly killed his bear-mother, when she had wandered into the forbidden precinct of Zeus. In every case, Zeus placed them both in the sky as the constellations Ursa Major, called Arktos (ἄρκτος), the Bear, by Greeks, and Ursa Minor.
According to John Tzetzes, Charon of Lampsacus wrote that Callisto's son Arcas had been fathered not by Zeus but rather by Apollo.
As a constellation, Ursa Major (who was also known as Helice, from an alternative origin story of the constellation) told Demeter, when the goddess asked the stars whether they knew anything about her daughter Persephone's abduction, to ask Helios the sun god, for he knew the deeds of the day well, while the night was blameless.
The name Kalliste (Καλλίστη), "most beautiful", may be recognized as an epithet of the goddess herself, though none of the inscriptions at Athens that record priests of Artemis Kalliste (Ἄρτεμις Καλλίστη), date before the third century BCE. Artemis Kalliste was worshiped in Athens in a shrine which lay outside the Dipylon gate, by the side of the road to the Academy. W. S. Ferguson suggested that Artemis Soteira and Artemis Kalliste were joined in a common cult administered by a single priest. The bearlike character of Artemis herself was a feature of the Brauronia. It has been suggested that the myths of Artemis' nymphs breaking their vows were originally about Artemis herself, before her characterization shifted to that of a sworn virgin who fiercely defends her chastity.
The myth in Catasterismi may be derived from the fact that a set of constellations appear close together in the sky, in and near the Zodiac sign of Libra, namely Ursa Minor, Ursa Major, Boötes, and Virgo. The constellation Boötes was explicitly identified in the Hesiodic Astronomia (Ἀστρονομία) as Arcas, the "Bear-warden" (Arktophylax; Ἀρκτοφύλαξ): "He is Arkas the son of Kallisto and Zeus, and he lived in the country about Lykaion. After Zeus had seduced Kallisto, Lykaon, pretending not to know of the matter, entertained Zeus, as Hesiod says, and set before him on the table the babe [Arkas] which he had cut up."
The stars of Ursa Major were all circumpolar in Athens of 400 BCE, and all but the stars in the Great Bear's left foot were circumpolar in Ovid's Rome, in the first century CE. Now, however, due to the precession of the equinoxes, the feet of the Great Bear constellation do sink below the horizon from Rome and especially from Athens; however, Ursa Minor (Arcas) does remain completely above the horizon, even from latitudes as far south as Honolulu and Hong Kong.
According to Julien d'Huy, who used phylogenetic and statistical tools, the story could be a recent transformation of a Palaeolithic myth.
Callisto's story was sometimes depicted in classical art, where the moment of transformation into a bear was the most popular. From the Renaissance on, a series of major history paintings, as well as many smaller cabinet paintings and book illustrations, usually called "Diana and Callisto", depicted the traumatic moment of discovery of the pregnancy, as the goddess and her nymphs bathed in a pool, following Ovid's account. The subject's attraction was undoubtedly mainly the opportunity it offered for a group of several females to be shown largely nude.
Titian's Diana and Callisto (1556-1559) was the greatest (though not the first) of these, quickly disseminated by a print by Cornelius Cort. Here, as in most subsequent depictions, Diana points angrily, as Callisto is held by two nymphs, who may be pulling off what little clothing remains on her. Other versions include one by Rubens, and Diana Bathing with her Nymphs with Actaeon and Callisto by Rembrandt, which unusually combines the moment with the arrival of Actaeon. The basic composition is unusually consistent across these depictions. Carlo Ridolfi said there was a version by Giorgione, who died in 1510, though his many attributions to Giorgione of paintings that are now lost are treated with suspicion by scholars. Other, less dramatic, treatments before Titian established his composition are by Palma Vecchio and Dosso Dossi. Annibale Carracci's The Loves of the Gods includes an image of Juno urging Diana to shoot Callisto in ursine form.
Although Ovid places the discovery in the ninth month of Callisto's pregnancy, in paintings she is generally shown with a rather modest bump for late pregnancy. Together with the Visitation in religious art, this was the leading recurring subject in history painting that required the depiction of pregnancy, which Early Modern painters still approached with some caution. In any case, the narrative required that the rest of the group had not previously noticed the pregnancy. Callisto being seduced by Zeus/Jupiter in disguise was also a popular subject, usually called "Jupiter and Callisto"; it was the clearest common subject from classical mythology to feature lesbian lovers. The two lovers are usually shown happily embracing in a bower. The violent rape described by Ovid as following Callisto's realization of what is going on is rarely shown. In versions before about 1700 Callisto may show some doubt about what is going on, as in the versions by Rubens. It was especially popular in the 18th century, when depictions were increasingly erotic; François Boucher painted several versions.
During the Nazi occupation of France, resistance poet Robert Desnos wrote a collection of poems entitled Calixto suivi de contrée, where he used the myth of Callisto as a symbol for beauty imprisoned beneath ugliness: a metaphor for France under the German occupation.
Aeschylus' tragedy Callisto is lost. | [
{
"paragraph_id": 0,
"text": "In Greek mythology, Callisto (/kəˈlɪstoʊ/; Ancient Greek: Καλλιστώ Greek pronunciation: [kallistɔ̌ː]) was a nymph, or the daughter of King Lycaon; the myth varies in such details. She was believed to be one of the followers of Artemis (Diana for the Romans) who attracted Zeus. Many versions of Callisto's story survive. According to some writers, Zeus transformed himself into the figure of Artemis to pursue Callisto, and she slept with him believing Zeus to be Artemis. She became pregnant and when this was eventually discovered, she was expelled from Artemis's group, after which a furious Hera, the wife of Zeus, transformed her into a bear, although in some versions Artemis is the one to give her an ursine form. Later, just as she was about to be killed by her son when he was hunting, she was set among the stars as Ursa Major (\"the Great Bear\") by Zeus. She was the bear-mother of the Arcadians, through her son Arcas by Zeus.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The fourth Galilean moon of Jupiter and a main belt asteroid are named after Callisto.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As a follower of Artemis, Callisto, who Hesiod said was the daughter of Lycaon, king of Arcadia, took a vow to remain a virgin, as did all the nymphs of Artemis.",
"title": "Mythology"
},
{
"paragraph_id": 3,
"text": "According to Hesiod, she was seduced by Zeus, and of the consequences that followed:",
"title": "Mythology"
},
{
"paragraph_id": 4,
"text": "[Callisto] chose to occupy herself with wild-beasts in the mountains together with Artemis, and, when she was seduced by Zeus, continued some time undetected by the goddess, but afterwards, when she was already with child, was seen by her bathing and so discovered. Upon this, the goddess was enraged and changed her into a beast. Thus she became a bear and gave birth to a son called Arcas. But while she was in the mountains, she was hunted by some goat-herds and given up with her babe to Lycaon. Some while after, she thought fit to go into the forbidden precinct of Zeus, not knowing the law, and being pursued by her own son and the Arcadians, was about to be killed because of the said law; but Zeus delivered her because of her connection with him and put her among the stars, giving her the name Bear because of the misfortune which had befallen her.",
"title": "Mythology"
},
{
"paragraph_id": 5,
"text": "Eratosthenes also mentions a variation in which the virginal companion of Artemis that was seduced by Zeus and eventually transformed into the constellation Ursa Minor was named Phoenice instead.",
"title": "Mythology"
},
{
"paragraph_id": 6,
"text": "According to Ovid, it was Jupiter who took the form of Diana so that he might evade his wife Juno's detection, forcing himself upon Callisto while she was separated from Diana and the other nymphs. Callisto recognized that something was wrong the moment Jupiter started giving her \"non-virginal kisses\", but by that point it was too late, and even though she fought him off, he overpowered her. The real Diana arrived in the scene soon after and called Callisto to her, only for the girl to run away in fear she was Jupiter, until she noticed the nymphs accompanying the goddess. Callisto's subsequent pregnancy was discovered several months later while she was bathing with Diana and her fellow nymphs. Diana became enraged when she saw that Callisto was pregnant and expelled her from the group. Callisto later gave birth to Arcas. Juno then took the opportunity to avenge her wounded pride and transformed the nymph into a bear. Sixteen years later Callisto, still a bear, encountered her son Arcas hunting in the forest. Just as Arcas was about to kill his own mother with his javelin, Jupiter averted the tragedy by placing mother and son amongst the stars as Ursa Major and Minor, respectively. Juno, enraged that her attempt at revenge had been frustrated, appealed to Tethys that the two might never meet her waters, thus providing a poetic explanation for the constellations' circumpolar positions in ancient times.",
"title": "Mythology"
},
{
"paragraph_id": 7,
"text": "According to Hyginus, the origin of the transformation of Zeus, with its lesbian overtones, was from a rendition of the tale in a comedy in a lost work by the Attic comedian Amphis where Zeus embraced Callisto as Artemis and she, after being questioned by Artemis for her pregnancy, blamed the goddess, thinking she had impregnated her; Artemis then changed her into a bear. She was caught by some Aetolians and brought to Lycaon, her father. Still a bear, she rushed with her son Arcas into a temple of Zeus as the Arcadians followed to kill them; Zeus turned mother and son into constellations. Hyginus also records a version where Hera changed Callisto for sleeping with Zeus, and Artemis later slew her while hunting, not recognizing her. In another of the versions Hyginus records, it was Zeus who turned Callisto into a bear, to conceal her from Juno, who had noticed what her husband was doing. Juno then pointed Callisto to Diana, who proceeded to shoot her with her arrows.",
"title": "Mythology"
},
{
"paragraph_id": 8,
"text": "According to the mythographer Apollodorus, Zeus forced himself on Callisto when he disguised himself as Artemis or Apollo, in order to lure the sworn maiden into his embrace. Apollodorus is the only author to mention Apollo, but implies that it is not a rarity. Callisto was then turned into a bear by Zeus trying to hide her from Hera, but Hera asked Artemis to shoot the animal, and Artemis complied. Alternatively, Artemis killed Callisto for not protecting her virginity. Nonnus also writes that a \"female paramour entered a woman's bed.\"",
"title": "Mythology"
},
{
"paragraph_id": 9,
"text": "Either Artemis \"slew Kallisto with a shot of her silver bow,\" according to Homer, in order to please Juno (Hera) as Pausanias and Pseudo-Apollodorus write or later Arcas, the eponym of Arcadia, nearly killed his bear-mother, when she had wandered into the forbidden precinct of Zeus. In every case, Zeus placed them both in the sky as the constellations Ursa Major, called Arktos (ἄρκτος), the Bear, by Greeks, and Ursa Minor.",
"title": "Mythology"
},
{
"paragraph_id": 10,
"text": "According to John Tzetzes, Charon of Lampsacus wrote that Callisto's son Arcas had been fathered not by Zeus but rather by Apollo.",
"title": "Mythology"
},
{
"paragraph_id": 11,
"text": "As a constellation, Ursa Major (who was also known as Helice, from an alternative origin story of the constellation) told Demeter, when the goddess asked the stars whether they knew anything about her daughter Persephone's abduction, to ask Helios the sun god, for he knew the deeds of the day well, while the night was blameless.",
"title": "Mythology"
},
{
"paragraph_id": 12,
"text": "The name Kalliste (Καλλίστη), \"most beautiful\", may be recognized as an epithet of the goddess herself, though none of the inscriptions at Athens that record priests of Artemis Kalliste (Ἄρτεμις Καλλίστη), date before the third century BCE. Artemis Kalliste was worshiped in Athens in a shrine which lay outside the Dipylon gate, by the side of the road to the Academy. W. S. Ferguson suggested that Artemis Soteira and Artemis Kalliste were joined in a common cult administered by a single priest. The bearlike character of Artemis herself was a feature of the Brauronia. It has been suggested that the myths of Artemis' nymphs breaking their vows were originally about Artemis herself, before her characterization shifted to that of a sworn virgin who fiercely defends her chastity.",
"title": "Origin of the myth"
},
{
"paragraph_id": 13,
"text": "The myth in Catasterismi may be derived from the fact that a set of constellations appear close together in the sky, in and near the Zodiac sign of Libra, namely Ursa Minor, Ursa Major, Boötes, and Virgo. The constellation Boötes, was explicitly identified in the Hesiodic Astronomia (Ἀστρονομία) as Arcas, the \"Bear-warden\" (Arktophylax; Ἀρκτοφύλαξ): He is Arkas the son of Kallisto and Zeus, and he lived in the country about Lykaion. After Zeus had seduced Kallisto, Lykaon, pretending not to know of the matter, entertained Zeus, as Hesiod says, and set before him on the table the babe [Arkas] which he had cut up.",
"title": "Origin of the myth"
},
{
"paragraph_id": 14,
"text": "The stars of Ursa Major were all circumpolar in Athens of 400 BCE, and all but the stars in the Great Bear's left foot were circumpolar in Ovid's Rome, in the first century CE. Now, however, due to the precession of the equinoxes, the feet of the Great Bear constellation do sink below the horizon from Rome and especially from Athens; however, Ursa Minor (Arcas) does remain completely above the horizon, even from latitudes as far south as Honolulu and Hong Kong.",
"title": "Origin of the myth"
},
{
"paragraph_id": 15,
"text": "According to Julien d'Huy, who used phylogenetic and statistical tools, the story could be a recent transformation of a Palaeolithic myth.",
"title": "Origin of the myth"
},
{
"paragraph_id": 16,
"text": "Callisto's story was sometimes depicted in classical art, where the moment of transformation into a bear was the most popular. From the Renaissance on a series of major history paintings as well as many smaller cabinet paintings and book illustrations, usually called \"Diana and Callisto\", depicted the traumatic moment of discovery of the pregnancy, as the goddess and her nymphs bathed in a pool, following Ovid's account. The subject's attraction was undoubtedly mainly the opportunity it offered for a group of several females to be shown largely nude.",
"title": "In art"
},
{
"paragraph_id": 17,
"text": "Titian's Diana and Callisto (1556-1559), was the greatest (though not the first) of these, quickly disseminated by a print by Cornelius Cort. Here, as in most subsequent depictions, Diana points angrily, as Callisto is held by two nymphs, who may be pulling off what little clothing remains on her. Other versions include one by Rubens, and Diana Bathing with her Nymphs with Actaeon and Callisto by Rembrandt, which unusually combines the moment with the arrival of Actaeon. The basic composition is rather unusually consistent. Carlo Ridolfi said there was a version by Giorgione, who died in 1510, though his many attributions to Giorgione of paintings that are now lost are treated with suspicion by scholars. Other, less dramatic, treatments before Titian established his composition are by Palma Vecchio and Dosso Dossi. Annibale Carracci's The Loves of the Gods includes an image of Juno urging Diana to shoot Callisto in ursine form.",
"title": "In art"
},
{
"paragraph_id": 18,
"text": "Although Ovid places the discovery in the ninth month of Callisto's pregnancy, in paintings she is generally shown with a rather modest bump for late pregnancy. With the Visitation in religious art, this was the leading recurring subject in history painting that required showing pregnancy in art, which Early Modern painters still approached with some caution. In any case, the narrative required that the rest of the group had not previously noticed the pregnancy. Callisto being seduced by Zeus/Jupiter in disguise was also a popular subject, usually called \"Jupiter and Callisto\"; it was the clearest common subject with lesbian lovers from classical mythology. The two lovers are usually shown happily embracing in a bower. The violent rape described by Ovid as following Callisto's realization of what is going on is rarely shown. In versions before about 1700 Callisto may show some doubt about what is going on, as in the versions by Rubens. It was especially popular in the 18th century, when depictions were increasingly erotic; François Boucher painted several versions.",
"title": "In art"
},
{
"paragraph_id": 19,
"text": "During the Nazi occupation of France, resistance poet Robert Desnos wrote a collection of poems entitled Calixto suivi de contrée, where he used the myth of Callisto as a symbol for beauty imprisoned beneath ugliness: a metaphor for France under the German occupation.",
"title": "In art"
},
{
"paragraph_id": 20,
"text": "Aeschylus' tragedy Callisto is lost.",
"title": "In art"
}
] | In Greek mythology, Callisto was a nymph, or the daughter of King Lycaon; the myth varies in such details. She was believed to be one of the followers of Artemis who attracted Zeus. Many versions of Callisto's story survive. According to some writers, Zeus transformed himself into the figure of Artemis to pursue Callisto, and she slept with him believing Zeus to be Artemis. She became pregnant and when this was eventually discovered, she was expelled from Artemis's group, after which a furious Hera, the wife of Zeus, transformed her into a bear, although in some versions Artemis is the one to give her an ursine form. Later, just as she was about to be killed by her son when he was hunting, she was set among the stars as Ursa Major by Zeus. She was the bear-mother of the Arcadians, through her son Arcas by Zeus. The fourth Galilean moon of Jupiter and a main belt asteroid are named after Callisto. | 2001-11-23T02:35:24Z | 2023-11-24T11:18:39Z | [
"Template:Chart/start",
"Template:Chart/end",
"Template:Color box",
"Template:Metamorphoses in Greco-Roman mythology",
"Template:Short description",
"Template:IPAc-en",
"Template:Cite web",
"Template:ISBN",
"Template:Chart bottom",
"Template:Portal",
"Template:Citation needed",
"Template:Lang",
"Template:Cite book",
"Template:Commons category",
"Template:Authority control",
"Template:Lang-grc",
"Template:IPA-grc",
"Template:Chart",
"Template:Reflist",
"Template:Distinguish",
"Template:Chart top"
] | https://en.wikipedia.org/wiki/Callisto_(mythology) |
7,218 | Cookie | A cookie (American English), or a biscuit (British English), is a baked or cooked snack or dessert that is typically small, flat and sweet. It usually contains flour, sugar, egg, and some type of oil, fat, or butter. It may include other ingredients such as raisins, oats, chocolate chips, nuts, etc.
Most English-speaking countries call crunchy cookies "biscuits", except for the United States and Canada, where "biscuit" refers to a type of quick bread. Chewier biscuits are sometimes called "cookies" even in the United Kingdom. Some cookies may also be named by their shape, such as date squares or bars.
Biscuit or cookie variants include sandwich biscuits, such as custard creams, Jammie Dodgers, Bourbons and Oreos, with marshmallow or jam filling and sometimes dipped in chocolate or another sweet coating. Cookies are often served with beverages such as milk, coffee or tea and sometimes dunked, an approach which releases more flavour from confections by dissolving the sugars, while also softening their texture. Factory-made cookies are sold in grocery stores, convenience stores and vending machines. Fresh-baked cookies are sold at bakeries and coffeehouses.
In many English-speaking countries outside North America, including the United Kingdom, the most common word for a crisp cookie is "biscuit". The term "cookie" is normally used to describe chewier ones. However, in many regions both terms are used. The container used to store cookies may be called a cookie jar.
In Scotland, the term "cookie" is sometimes used to describe a plain bun.
Cookies that are baked as a solid layer on a sheet pan and then cut, rather than being baked as individual pieces, are called bar cookies in American English or traybakes in British English.
The word cookie dates from at least 1701 in Scottish usage, where the word meant "plain bun" rather than a thin baked good, and so it is not certain whether it is the same word. From 1808, the word "cookie" is attested in the sense of "small, flat, sweet cake" in American English. The American use is derived from Dutch koekje ("little cake"), which is a diminutive of "koek" ("cake"), which came from the Middle Dutch word "koke". Another claim is that the American name derives from the Dutch word koekje, or more precisely its informal, dialect variant koekie, which means "little cake", and that it arrived in American English with the Dutch settlement of New Netherland in the early 1600s.
According to the Scottish National Dictionary, its Scottish name may derive from the diminutive form (+ suffix -ie) of the word cook, giving the Middle Scots cookie, cooky or cu(c)kie. There was much trade and cultural contact across the North Sea between the Low Countries and Scotland during the Middle Ages, which can also be seen in the history of curling and, perhaps, golf.
Cookies are most commonly baked until crisp, or else for just long enough to ensure a soft interior. Other types of cookies are not baked at all, such as varieties of peanut butter cookies that use solidified chocolate rather than set eggs and wheat gluten as a binder. Cookies are produced in a wide variety of styles, using an array of ingredients including sugars, spices, chocolate, butter, peanut butter, nuts, or dried fruits.
A general theory of cookies may be formulated in the following way. Despite its descent from cakes and other sweetened breads, the cookie in almost all its forms has abandoned water as a medium for cohesion. Water in cakes serves to make the batter as thin as possible, the better to allow bubbles—responsible for a cake's fluffiness—to form. In the cookie the agent of cohesion has become some form of oil. Oils, whether in the form of butter, vegetable oils, or lard, are much more viscous than water and evaporate freely at a far higher temperature. Thus a cake made with butter or eggs in place of water is much denser after removal from the oven.
Rather than evaporating as water does in a baking cake, oils in cookies remain. These oils saturate the cavities created during baking by bubbles of escaping gases. These gases are primarily composed of steam vaporized from the egg whites and the carbon dioxide released by heating the baking powder. This saturation produces the most texturally attractive feature of the cookie, and indeed all fried foods: crispness saturated with a moisture (namely oil) that does not render soggy the food it has soaked into.
Cookie-like hard wafers have existed for as long as baking is documented, in part because they survive travel very well, but they were usually not sweet enough to be considered cookies by modern standards.
Cookies appear to have their origins in 7th century AD Persia, shortly after the use of sugar became relatively common in the region. They spread to Europe through the Muslim conquest of Spain. By the 14th century, they were common in all levels of society throughout Europe, from royal cuisine to street vendors. The first documented instance of the figure-shaped gingerbread man was at the court of Elizabeth I of England in the 16th century. She had the gingerbread figures made and presented in the likeness of some of her important guests.
With global travel becoming widespread at that time, cookies made a natural travel companion, a modernized equivalent of the travel cakes used throughout history. One of the most popular early cookies, which traveled especially well and became known on every continent by similar names, was the jumble, a relatively hard cookie made largely from nuts, sweetener, and water.
Cookies came to America through the Dutch in New Amsterdam in the late 1620s. The Dutch word "koekje" was Anglicized to "cookie" or cooky. The earliest reference to cookies in America is in 1703, when "The Dutch in New York provided...'in 1703...at a funeral 800 cookies...'"
The most common modern cookie, given its style by the creaming of butter and sugar, was not common until the 18th century. The Industrial Revolution in Britain and the consumers it created saw cookies (biscuits) become products for the masses, and firms such as Huntley & Palmers (formed in 1822), McVitie's (formed in 1830) and Carr's (formed in 1831) were all established. The decorative biscuit tin, invented by Huntley & Palmers in 1831, saw British cookies exported around the world. In 1891, Cadbury filed a patent for a chocolate-coated cookie.
Cookies are broadly classified according to how they are formed or made, including at least these categories:
Other types of cookies are classified for other reasons, such as their ingredients, size, or intended time of serving:
Leah Ettman from Nutrition Action has criticized the high calorie count and fat content of supersized cookies, which are extra large cookies; she cites the Panera Kitchen Sink Cookie, a supersized chocolate chip cookie, which measures 5 1/2 inches in diameter and has 800 calories. For busy people who eat breakfast cookies in the morning, Kate Bratskeir from the Huffington Post recommends lower-sugar cookies filled with "heart-healthy nuts and fiber-rich oats". A book on nutrition by Paul Insel et al. notes that "low-fat" or "diet cookies" may have the same number of calories as regular cookies, due to added sugar.
There are a number of slang usages of the term "cookie". The slang use of "cookie" to mean a person, "especially an attractive woman" is attested to in print since 1920. The catchphrase "that's the way the cookie crumbles", which means "that's just the way things happen" is attested to in print in 1955. Other slang terms include "smart cookie" and "tough cookie." According to The Cambridge International Dictionary of Idioms, a smart cookie is "someone who is clever and good at dealing with difficult situations." The word "cookie" has been vulgar slang for "vagina" in the US since 1970. The word "cookies" is used to refer to the contents of the stomach, often in reference to vomiting (e.g., "pop your cookies" a 1960s expression, or "toss your cookies", a 1970s expression). The expression "cookie cutter", in addition to referring literally to a culinary device used to cut rolled cookie dough into shapes, is also used metaphorically to refer to items or things "having the same configuration or look as many others" (e.g., a "cookie cutter tract house") or to label something as "stereotyped or formulaic" (e.g., an action movie filled with "generic cookie cutter characters"). "Cookie duster" is a whimsical expression for a mustache.
Cookie Monster is a Muppet on the children's television show Sesame Street. He is best known for his voracious appetite for cookies and his famous eating phrases, such as "Me want cookie!", "Me eat cookie!" (or simply "COOKIE!"), and "Om nom nom nom" (said through a mouth full of food).
Cookie Clicker is an incremental browser game in which the player clicks an on-screen cookie to earn an ever-growing number of cookies. | [
{
"paragraph_id": 0,
"text": "A cookie (American English), or a biscuit (British English), is a baked or cooked snack or dessert that is typically small, flat and sweet. It usually contains flour, sugar, egg, and some type of oil, fat, or butter. It may include other ingredients such as raisins, oats, chocolate chips, nuts, etc.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Most English-speaking countries call crunchy cookies \"biscuits\", except for the United States and Canada, where \"biscuit\" refers to a type of quick bread. Chewier biscuits are sometimes called \"cookies\" even in the United Kingdom. Some cookies may also be named by their shape, such as date squares or bars.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Biscuit or cookie variants include sandwich biscuits, such as custard creams, Jammie Dodgers, Bourbons and Oreos, with marshmallow or jam filling and sometimes dipped in chocolate or another sweet coating. Cookies are often served with beverages such as milk, coffee or tea and sometimes dunked, an approach which releases more flavour from confections by dissolving the sugars, while also softening their texture. Factory-made cookies are sold in grocery stores, convenience stores and vending machines. Fresh-baked cookies are sold at bakeries and coffeehouses.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In many English-speaking countries outside North America, including the United Kingdom, the most common word for a crisp cookie is \"biscuit\". The term \"cookie\" is normally used to describe chewier ones. However, in many regions both terms are used. The container used to store cookies may be called a cookie jar.",
"title": "Terminology"
},
{
"paragraph_id": 4,
"text": "In Scotland, the term \"cookie\" is sometimes used to describe a plain bun.",
"title": "Terminology"
},
{
"paragraph_id": 5,
"text": "Cookies that are baked as a solid layer on a sheet pan and then cut, rather than being baked as individual pieces, are called bar cookies in American English or traybakes in British English .",
"title": "Terminology"
},
{
"paragraph_id": 6,
"text": "The word cookie dates from at least 1701 in Scottish usage where the word meant \"plain bun\", rather than thin baked good, and so it is not certain whether it is the same word. From 1808, the word \"cookie\" is attested \"...in the sense of \"small, flat, sweet cake\" in American English. The American use is derived from Dutch koekje \"little cake\", which is a diminutive of \"koek\" (\"cake\"), which came from the Middle Dutch word \"koke\". Another claim is that the American name derives from the Dutch word koekje or more precisely its informal, dialect variant koekie which means little cake, and arrived in American English with the Dutch settlement of New Netherland, in the early 1600s.",
"title": "Etymology"
},
{
"paragraph_id": 7,
"text": "According to the Scottish National Dictionary, its Scottish name may derive from the diminutive form (+ suffix -ie) of the word cook, giving the Middle Scots cookie, cooky or cu(c)kie. There was much trade and cultural contact across the North Sea between the Low Countries and Scotland during the Middle Ages, which can also be seen in the history of curling and, perhaps, golf.",
"title": "Etymology"
},
{
"paragraph_id": 8,
"text": "Cookies are most commonly baked until crisp or else for just long enough to ensure soft interior. Other types of cookies are not baked at all, such as varieties of peanut butter cookies that use solidified chocolate rather than set eggs and wheat gluten as a binder. Cookies are produced in a wide variety of styles, using an array of ingredients including sugars, spices, chocolate, butter, peanut butter, nuts, or dried fruits.",
"title": "Description"
},
{
"paragraph_id": 9,
"text": "A general theory of cookies may be formulated in the following way. Despite its descent from cakes and other sweetened breads, the cookie in almost all its forms has abandoned water as a medium for cohesion. Water in cakes serves to make the batter as thin as possible, the better to allow bubbles—responsible for a cake's fluffiness—to form. In the cookie the agent of cohesion has become some form of oil. Oils, whether in the form of butter, vegetable oils, or lard, are much more viscous than water and evaporate freely at a far higher temperature. Thus a cake made with butter or eggs in place of water is much denser after removal from the oven.",
"title": "Description"
},
{
"paragraph_id": 10,
"text": "Rather than evaporating as water does in a baking cake, oils in cookies remain. These oils saturate the cavities created during baking by bubbles of escaping gases. These gases are primarily composed of steam vaporized from the egg whites and the carbon dioxide released by heating the baking powder. This saturation produces the most texturally attractive feature of the cookie, and indeed all fried foods: crispness saturated with a moisture (namely oil) that does not render soggy the food it has soaked into.",
"title": "Description"
},
{
"paragraph_id": 11,
"text": "Cookie-like hard wafers have existed for as long as baking is documented, in part because they survive travel very well, but they were usually not sweet enough to be considered cookies by modern standards.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Cookies appear to have their origins in 7th century AD Persia, shortly after the use of sugar became relatively common in the region. They spread to Europe through the Muslim conquest of Spain. By the 14th century, they were common in all levels of society throughout Europe, from royal cuisine to street vendors. The first documented instance of the figure-shaped gingerbread man was at the court of Elizabeth I of England in the 16th century. She had the gingerbread figures made and presented in the likeness of some of her important guests.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "With global travel becoming widespread at that time, cookies made a natural travel companion, a modernized equivalent of the travel cakes used throughout history. One of the most popular early cookies, which traveled especially well and became known on every continent by similar names, was the jumble, a relatively hard cookie made largely from nuts, sweetener, and water.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Cookies came to America through the Dutch in New Amsterdam in the late 1620s. The Dutch word \"koekje\" was Anglicized to \"cookie\" or cooky. The earliest reference to cookies in America is in 1703, when \"The Dutch in New York provided...'in 1703...at a funeral 800 cookies...'\"",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The most common modern cookie, given its style by the creaming of butter and sugar, was not common until the 18th century. The Industrial Revolution in Britain and the consumers it created saw cookies (biscuits) become products for the masses, and firms such as Huntley & Palmers (formed in 1822), McVitie's (formed in 1830) and Carr's (formed in 1831) were all established. The decorative biscuit tin, invented by Huntley & Palmers in 1831, saw British cookies exported around the world. In 1891, Cadbury filed a patent for a chocolate-coated cookie.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Cookies are broadly classified according to how they are formed or made, including at least these categories:",
"title": "Classification"
},
{
"paragraph_id": 17,
"text": "Other types of cookies are classified for other reasons, such as their ingredients, size, or intended time of serving:",
"title": "Classification"
},
{
"paragraph_id": 18,
"text": "Leah Ettman from Nutrition Action has criticized the high calorie count and fat content of supersized cookies, which are extra large cookies; she cites the Panera Kitchen Sink Cookie, a supersized chocolate chip cookie, which measures 5 1/2 inches in diameter and has 800 calories. For busy people who eat breakfast cookies in the morning, Kate Bratskeir from the Huffington Post recommends lower-sugar cookies filled with \"heart-healthy nuts and fiber-rich oats\". A book on nutrition by Paul Insel et al. notes that \"low-fat\" or \"diet cookies\" may have the same number of calories as regular cookies, due to added sugar.",
"title": "Reception"
},
{
"paragraph_id": 19,
"text": "There are a number of slang usages of the term \"cookie\". The slang use of \"cookie\" to mean a person, \"especially an attractive woman\" is attested to in print since 1920. The catchphrase \"that's the way the cookie crumbles\", which means \"that's just the way things happen\" is attested to in print in 1955. Other slang terms include \"smart cookie\" and \"tough cookie.\" According to The Cambridge International Dictionary of Idioms, a smart cookie is \"someone who is clever and good at dealing with difficult situations.\" The word \"cookie\" has been vulgar slang for \"vagina\" in the US since 1970. The word \"cookies\" is used to refer to the contents of the stomach, often in reference to vomiting (e.g., \"pop your cookies\" a 1960s expression, or \"toss your cookies\", a 1970s expression). The expression \"cookie cutter\", in addition to referring literally to a culinary device used to cut rolled cookie dough into shapes, is also used metaphorically to refer to items or things \"having the same configuration or look as many others\" (e.g., a \"cookie cutter tract house\") or to label something as \"stereotyped or formulaic\" (e.g., an action movie filled with \"generic cookie cutter characters\"). \"Cookie duster\" is a whimsical expression for a mustache.",
"title": "Popular culture"
},
{
"paragraph_id": 20,
"text": "Cookie Monster is a Muppet on the children's television show Sesame Street. He is best known for his voracious appetite for cookies and his famous eating phrases, such as \"Me want cookie!\", \"Me eat cookie!\" (or simply \"COOKIE!\"), and \"Om nom nom nom\" (said through a mouth full of food).",
"title": "Popular culture"
},
{
"paragraph_id": 21,
"text": "Cookie Clicker is a game where you click a cookie.",
"title": "Popular culture"
}
] | A cookie, or a biscuit, is a baked or cooked snack or dessert that is typically small, flat and sweet. It usually contains flour, sugar, egg, and some type of oil, fat, or butter. It may include other ingredients such as raisins, oats, chocolate chips, nuts, etc. Most English-speaking countries call crunchy cookies "biscuits", except for the United States and Canada, where "biscuit" refers to a type of quick bread. Chewier biscuits are sometimes called "cookies" even in the United Kingdom. Some cookies may also be named by their shape, such as date squares or bars. Biscuit or cookie variants include sandwich biscuits, such as custard creams, Jammie Dodgers, Bourbons and Oreos, with marshmallow or jam filling and sometimes dipped in chocolate or another sweet coating. Cookies are often served with beverages such as milk, coffee or tea and sometimes dunked, an approach which releases more flavour from confections by dissolving the sugars, while also softening their texture. Factory-made cookies are sold in grocery stores, convenience stores and vending machines. Fresh-baked cookies are sold at bakeries and coffeehouses. | 2001-11-23T17:47:35Z | 2023-12-28T02:41:05Z | [
"Template:Div col",
"Template:Div col end",
"Template:Portal",
"Template:Cbignore",
"Template:Wiktionary-inline",
"Template:Authority control",
"Template:Anchor",
"Template:See also",
"Template:Iranian cuisine",
"Template:Hatgrp",
"Template:Citation needed",
"Template:'\"",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Cite video",
"Template:Short description",
"Template:Lang",
"Template:Infobox food",
"Template:Reflist",
"Template:Cite web",
"Template:Cite magazine",
"Template:Cite book",
"Template:Fast food",
"Template:Distinguish",
"Template:Pp-vandalism"
] | https://en.wikipedia.org/wiki/Cookie |
7,220 | Common Gateway Interface | In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP/S user requests.
Such programs are often written in a scripting language and are commonly referred to as CGI scripts, but they may include compiled programs.
A typical use case occurs when a web user submits a web form on a web page that uses CGI. The form's data is sent to the web server within an HTTP request with a URL denoting a CGI script. The web server then launches the CGI script in a new computer process, passing the form data to it. The output of the CGI script, usually in the form of HTML, is returned by the script to the Web server, and the server relays it back to the browser as its response to the browser's request.
Developed in the early 1990s, CGI was the earliest common method available that allowed a web page to be interactive. Due to a necessity to run CGI scripts in a separate process every time the request comes in from a client, various alternatives were developed.
In 1993, the National Center for Supercomputing Applications (NCSA) team wrote the specification for calling command line executables on the www-talk mailing list. The other Web server developers adopted it, and it has been a standard for Web servers ever since. A work group chaired by Ken Coar started in November 1997 to get the NCSA definition of CGI more formally defined. This work resulted in RFC 3875, which specified CGI Version 1.1. Specifically mentioned in the RFC are the following contributors:
Historically CGI programs were often written using the C programming language. RFC 3875 "The Common Gateway Interface (CGI)" partially defines CGI using C, in saying that environment variables "are accessed by the C library routine getenv() or variable environ".
The name CGI comes from the early days of the Web, where webmasters wanted to connect legacy information systems such as databases to their Web servers. The CGI program was executed by the server and provided a common "gateway" between the Web server and the legacy information system.
Traditionally a Web server has a directory which is designated as a document collection, that is, a set of files that can be sent to Web browsers connected to the server. For example, if a web server has the fully-qualified domain name www.example.com, and its document collection is stored at /usr/local/apache/htdocs/ in the local file system (its document root), then the web server will respond to a request for http://www.example.com/index.html by sending to the browser a copy of the file /usr/local/apache/htdocs/index.html (if it exists).
For pages constructed on the fly, the server software may defer requests to separate programs and relay the results to the requesting client (usually, a Web browser that displays the page to the end user).
Such programs usually require some additional information to be specified with the request, such as query strings or cookies. Conversely, upon returning, the script must provide all the information required by HTTP for a response to the request: the HTTP status of the request, the document content (if available), the document type (e.g. HTML, PDF, or plain text), et cetera.
Initially, there were no standardized methods for data exchange between a browser, the HTTP server with which it was communicating and the scripts on the server that were expected to process the data and ultimately return a result to the browser. As a result, mutual incompatibilities existed between different HTTP server variants that undermined script portability.
Recognition of this problem led to the specification of how data exchange was to be carried out, resulting in the development of CGI. Web page-generating programs invoked by server software that adheres to the CGI specification are known as CGI scripts, even though they may actually have been written in a non-scripting language, such as C.
The CGI specification was quickly adopted and continues to be supported by all well-known HTTP server packages, such as Apache, Microsoft IIS, and (with an extension) node.js-based servers.
An early use of CGI scripts was to process forms. In the early days of HTML, HTML forms typically had an "action" attribute and a button designated as the "submit" button. When the submit button is pushed, the URI specified in the "action" attribute is sent to the server with the data from the form sent as a query string. If the "action" specifies a CGI script, then that script is executed, and the script in turn generates an HTML page.
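A minimal sketch of such a form-handling script is shown below, written in Python using only the standard library; the form field name "username" and the greeting page are illustrative assumptions, not part of the CGI specification. The server passes request metadata in environment variables, and the script replies on standard output with a header block, a blank line, and then the document.

    #!/usr/bin/env python3
    # Minimal sketch of a form-handling CGI script (assumes a CGI-enabled server).
    import html
    import os
    import urllib.parse

    params = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
    name = params.get("username", ["stranger"])[0]   # "username" is an assumed field name

    print("Content-Type: text/html")   # document type header for the response
    print()                            # blank line ends the header block
    print("<html><body>")
    print(f"<p>Hello, {html.escape(name)}!</p>")     # escape user input before echoing it
    print("</body></html>")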
A Web server that supports CGI can be configured to interpret a URL that it serves as a reference to a CGI script. A common convention is to have a cgi-bin/ directory at the base of the directory tree and treat all executable files within this directory (and no other, for security) as CGI scripts. When a Web browser requests a URL that points to a file within the CGI directory (e.g., http://example.com/cgi-bin/printenv.pl/with/additional/path?and=a&query=string), then, instead of simply sending that file (/usr/local/apache/htdocs/cgi-bin/printenv.pl) to the Web browser, the HTTP server runs the specified script and passes the output of the script to the Web browser. That is, anything that the script sends to standard output is passed to the Web client instead of being shown in the terminal window that started the web server. Another popular convention is to use filename extensions; for instance, if CGI scripts are consistently given the extension .cgi, the Web server can be configured to interpret all such files as CGI scripts. While convenient, and required by many prepackaged scripts, it opens the server to attack if a remote user can upload executable code with the proper extension.
The CGI specification defines how additional information passed with the request is passed to the script. The Web server creates a subset of the environment variables passed to it and adds details pertinent to the HTTP environment. For instance, if a slash and additional directory name(s) are appended to the URL immediately after the name of the script (in this example, /with/additional/path), then that path is stored in the PATH_INFO environment variable before the script is called. If parameters are sent to the script via an HTTP GET request (a question mark appended to the URL, followed by param=value pairs; in the example, ?and=a&query=string), then those parameters are stored in the QUERY_STRING environment variable before the script is called. Request HTTP message body, such as form parameters sent via an HTTP POST request, are passed to the script's standard input. The script can then read these environment variables or data from standard input and adapt to the Web browser's request.
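The sketch below, under the same assumptions as the earlier example, shows how a script can read the request data described in this paragraph: the extra path, the parsed query string, and, for a POST request, the message body from standard input. The variable names are illustrative; only the environment variable names come from the CGI specification.

    #!/usr/bin/env python3
    # Sketch of reading CGI request data: PATH_INFO, QUERY_STRING, and the
    # request body (for POST) from standard input, sized by CONTENT_LENGTH.
    import os
    import sys
    import urllib.parse

    extra_path = os.environ.get("PATH_INFO", "")                       # e.g. "/with/additional/path"
    query = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))  # e.g. {'and': ['a'], 'query': ['string']}

    form = {}
    if os.environ.get("REQUEST_METHOD") == "POST":
        length = int(os.environ.get("CONTENT_LENGTH") or 0)
        body = sys.stdin.read(length)            # POST data arrives on standard input
        form = urllib.parse.parse_qs(body)       # assumes a form-encoded body

    print("Content-Type: text/plain")
    print()
    print("path info:", extra_path)
    print("query parameters:", query)
    print("form fields:", form)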
CGI is often used to process input information from the user and produce the appropriate output. An example of a CGI program is one implementing a wiki. If the user agent requests the name of an entry, the Web server executes the CGI program. The CGI program retrieves the source of that entry's page (if one exists), transforms it into HTML, and prints the result. The Web server receives the output from the CGI program and transmits it to the user agent. Then if the user agent clicks the "Edit page" button, the CGI program populates an HTML textarea or other editing control with the page's contents. Finally if the user agent clicks the "Publish page" button, the CGI program transforms the updated HTML into the source of that entry's page and saves it.
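A rough sketch of how such a wiki program could be structured is given below; the page directory, the "text" field name, and the decision to render the stored source inside a pre element are illustrative assumptions, and the edit form is omitted for brevity. A real program would also need to validate the entry name before using it as a file name.

    #!/usr/bin/env python3
    # Toy sketch of the wiki scenario: PATH_INFO selects the entry and
    # REQUEST_METHOD selects the action. Storage and markup handling are stubbed.
    import html
    import os
    import sys
    import urllib.parse

    entry = os.environ.get("PATH_INFO", "/FrontPage").strip("/") or "FrontPage"
    method = os.environ.get("REQUEST_METHOD", "GET")
    path = f"pages/{entry}.txt"      # assumed on-disk layout for page sources

    print("Content-Type: text/html")
    print()
    if method == "POST":             # "Publish page": save the submitted source
        length = int(os.environ.get("CONTENT_LENGTH") or 0)
        fields = urllib.parse.parse_qs(sys.stdin.read(length))
        with open(path, "w", encoding="utf-8") as f:
            f.write(fields.get("text", [""])[0])
        print(f"<p>Saved {html.escape(entry)}.</p>")
    else:                            # view: show the stored source as HTML
        try:
            with open(path, encoding="utf-8") as f:
                source = f.read()
        except FileNotFoundError:
            source = ""
        print(f"<h1>{html.escape(entry)}</h1><pre>{html.escape(source)}</pre>")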
CGI programs run, by default, in the security context of the Web server. When first introduced a number of example scripts were provided with the reference distributions of the NCSA, Apache and CERN Web servers to show how shell scripts or C programs could be coded to make use of the new CGI. One such example script was a CGI program called PHF that implemented a simple phone book.
In common with a number of other scripts at the time, this script made use of a function, escape_shell_cmd(). The function was supposed to sanitize its argument, which came from user input, before the input was passed to the Unix shell to be run in the security context of the Web server. The script did not correctly sanitize all input and allowed new lines to be passed to the shell, which effectively allowed multiple commands to be run. The results of these commands were then displayed by the Web server. If the security context of the Web server allowed it, malicious commands could be executed by attackers.
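The class of bug can be illustrated with a small sketch (PHF itself was a C program, so this is not its actual code): concatenating user input into a shell command line lets a URL-encoded newline smuggle in a second command, whereas passing the arguments as a list avoids the shell entirely. The phone-book file name and field name are assumptions.

    #!/usr/bin/env python3
    # Illustration of the unsanitized-input bug class, not the original PHF code.
    import os
    import subprocess
    import urllib.parse

    params = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
    name = params.get("name", [""])[0]

    # Unsafe pattern: the value is interpolated into a shell command line, so a
    # request containing "%0Acat%20/etc/passwd" (a URL-encoded newline) would make
    # the shell run a second command in the server's security context.
    #   os.system("grep " + name + " phonebook.txt")

    # Safer pattern: no shell is involved; the value is a single argument, and
    # "--" stops grep from treating it as an option.
    result = subprocess.run(["grep", "--", name, "phonebook.txt"],
                            capture_output=True, text=True)

    print("Content-Type: text/plain")
    print()
    print(result.stdout)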
This was the first widespread example of a new type of Web based attack, where unsanitized data from Web users could lead to execution of code on a Web server. Because the example code was installed by default, attacks were widespread and led to a number of security advisories in early 1996.
For each incoming HTTP request, a Web server creates a new CGI process for handling it and destroys the CGI process after the HTTP request has been handled. Creating and destroying a process can be expensive: it can consume more CPU time and memory resources than the actual work of generating the process's output, especially when the CGI program still needs to be interpreted by a virtual machine. For a high number of HTTP requests, the resulting workload can quickly overwhelm the Web server.
The computational overhead involved in CGI process creation and destruction can be reduced by the following techniques:
The optimal configuration for any Web application depends on application-specific details, amount of traffic, and complexity of the transaction; these trade-offs need to be analyzed to determine the best implementation for a given task and time budget. Web frameworks offer an alternative to using CGI scripts to interact with user agents. | [
{
"paragraph_id": 0,
"text": "In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP/S user requests.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Such programs are often written in a scripting language and are commonly referred to as CGI scripts, but they may include compiled programs.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A typical use case occurs when a web user submits a web form on a web page that uses CGI. The form's data is sent to the web server within an HTTP request with a URL denoting a CGI script. The web server then launches the CGI script in a new computer process, passing the form data to it. The output of the CGI script, usually in the form of HTML, is returned by the script to the Web server, and the server relays it back to the browser as its response to the browser's request.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Developed in the early 1990s, CGI was the earliest common method available that allowed a web page to be interactive. Due to a necessity to run CGI scripts in a separate process every time the request comes in from a client, various alternatives were developed.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In 1993, the National Center for Supercomputing Applications (NCSA) team wrote the specification for calling command line executables on the www-talk mailing list. The other Web server developers adopted it, and it has been a standard for Web servers ever since. A work group chaired by Ken Coar started in November 1997 to get the NCSA definition of CGI more formally defined. This work resulted in RFC 3875, which specified CGI Version 1.1. Specifically mentioned in the RFC are the following contributors:",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Historically CGI programs were often written using the C programming language. RFC 3875 \"The Common Gateway Interface (CGI)\" partially defines CGI using C, in saying that environment variables \"are accessed by the C library routine getenv() or variable environ\".",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The name CGI comes from the early days of the Web, where webmasters wanted to connect legacy information systems such as databases to their Web servers. The CGI program was executed by the server and provided a common \"gateway\" between the Web server and the legacy information system.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Traditionally a Web server has a directory which is designated as a document collection, that is, a set of files that can be sent to Web browsers connected to the server. For example, if a web server has the fully-qualified domain name www.example.com, and its document collection is stored at /usr/local/apache/htdocs/ in the local file system (its document root), then the web server will respond to a request for http://www.example.com/index.html by sending to the browser a copy of the file /usr/local/apache/htdocs/index.html (if it exists).",
"title": "Purpose"
},
{
"paragraph_id": 8,
"text": "For pages constructed on the fly, the server software may defer requests to separate programs and relay the results to the requesting client (usually, a Web browser that displays the page to the end user).",
"title": "Purpose"
},
{
"paragraph_id": 9,
"text": "Such programs usually require some additional information to be specified with the request, such as query strings or cookies. Conversely, upon returning, the script must provide all the information required by HTTP for a response to the request: the HTTP status of the request, the document content (if available), the document type (e.g. HTML, PDF, or plain text), et cetera.",
"title": "Purpose"
},
{
"paragraph_id": 10,
"text": "Initially, there were no standardized methods for data exchange between a browser, the HTTP server with which it was communicating and the scripts on the server that were expected to process the data and ultimately return a result to the browser. As a result, mutual incompatibilities existed between different HTTP server variants that undermined script portability.",
"title": "Purpose"
},
{
"paragraph_id": 11,
"text": "Recognition of this problem led to the specification of how data exchange was to be carried out, resulting in the development of CGI. Web page-generating programs invoked by server software that adheres to the CGI specification are known as CGI scripts, even though they may actually have been written in a non-scripting language, such as C.",
"title": "Purpose"
},
{
"paragraph_id": 12,
"text": "The CGI specification was quickly adopted and continues to be supported by all well-known HTTP server packages, such as Apache, Microsoft IIS, and (with an extension) node.js-based servers.",
"title": "Purpose"
},
{
"paragraph_id": 13,
"text": "An early use of CGI scripts was to process forms. In the beginning of HTML, HTML forms typically had an \"action\" attribute and a button designated as the \"submit\" button. When the submit button is pushed the URI specified in the \"action\" attribute would be sent to the server with the data from the form sent as a query string. If the \"action\" specifies a CGI script then the CGI script would be executed, the script in turn generating an HTML page.",
"title": "Purpose"
},
{
"paragraph_id": 14,
"text": "A Web server that supports CGI can be configured to interpret a URL that it serves as a reference to a CGI script. A common convention is to have a cgi-bin/ directory at the base of the directory tree and treat all executable files within this directory (and no other, for security) as CGI scripts. When a Web browser requests a URL that points to a file within the CGI directory (e.g., http://example.com/cgi-bin/printenv.pl/with/additional/path?and=a&query=string), then, instead of simply sending that file (/usr/local/apache/htdocs/cgi-bin/printenv.pl) to the Web browser, the HTTP server runs the specified script and passes the output of the script to the Web browser. That is, anything that the script sends to standard output is passed to the Web client instead of being shown in the terminal window that started the web server. Another popular convention is to use filename extensions; for instance, if CGI scripts are consistently given the extension .cgi, the Web server can be configured to interpret all such files as CGI scripts. While convenient, and required by many prepackaged scripts, it opens the server to attack if a remote user can upload executable code with the proper extension.",
"title": "Deployment"
},
{
"paragraph_id": 15,
"text": "The CGI specification defines how additional information passed with the request is passed to the script. The Web server creates a subset of the environment variables passed to it and adds details pertinent to the HTTP environment. For instance, if a slash and additional directory name(s) are appended to the URL immediately after the name of the script (in this example, /with/additional/path), then that path is stored in the PATH_INFO environment variable before the script is called. If parameters are sent to the script via an HTTP GET request (a question mark appended to the URL, followed by param=value pairs; in the example, ?and=a&query=string), then those parameters are stored in the QUERY_STRING environment variable before the script is called. Request HTTP message body, such as form parameters sent via an HTTP POST request, are passed to the script's standard input. The script can then read these environment variables or data from standard input and adapt to the Web browser's request.",
"title": "Deployment"
},
{
"paragraph_id": 16,
"text": "CGI is often used to process input information from the user and produce the appropriate output. An example of a CGI program is one implementing a wiki. If the user agent requests the name of an entry, the Web server executes the CGI program. The CGI program retrieves the source of that entry's page (if one exists), transforms it into HTML, and prints the result. The Web server receives the output from the CGI program and transmits it to the user agent. Then if the user agent clicks the \"Edit page\" button, the CGI program populates an HTML textarea or other editing control with the page's contents. Finally if the user agent clicks the \"Publish page\" button, the CGI program transforms the updated HTML into the source of that entry's page and saves it.",
"title": "Uses"
},
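The wiki flow just described might be organised roughly as follows; load_source and render_html are hypothetical helpers standing in for the storage and markup layers, and the "action" parameter name is an assumption, not part of any particular wiki engine.

```python
#!/usr/bin/env python3
# Rough dispatch sketch for the wiki use case: the entry name comes from the
# extra path, the requested action from the query string.
import os
from html import escape
from urllib.parse import parse_qs

def load_source(entry):          # hypothetical storage helper
    return f"Wiki source text for {entry}"

def render_html(source):         # hypothetical wiki-markup-to-HTML helper
    return f"<p>{escape(source)}</p>"

entry = os.environ.get("PATH_INFO", "/MainPage").lstrip("/")
action = parse_qs(os.environ.get("QUERY_STRING", "")).get("action", ["view"])[0]

print("Content-Type: text/html")
print()
if action == "edit":
    # populate an editing control with the entry's current source
    print(f"<form method='post'><textarea name='src'>{escape(load_source(entry))}</textarea>"
          f"<button>Publish page</button></form>")
else:
    # transform the stored source into HTML and print the result
    print(render_html(load_source(entry)))
```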
{
"paragraph_id": 17,
"text": "CGI programs run, by default, in the security context of the Web server. When first introduced a number of example scripts were provided with the reference distributions of the NCSA, Apache and CERN Web servers to show how shell scripts or C programs could be coded to make use of the new CGI. One such example script was a CGI program called PHF that implemented a simple phone book.",
"title": "Security"
},
{
"paragraph_id": 18,
"text": "In common with a number of other scripts at the time, this script made use of a function: escape_shell_cmd(). The function was supposed to sanitize its argument, which came from user input and then pass the input to the Unix shell, to be run in the security context of the Web server. The script did not correctly sanitize all input and allowed new lines to be passed to the shell, which effectively allowed multiple commands to be run. The results of these commands were then displayed on the Web server. If the security context of the Web server allowed it, malicious commands could be executed by attackers.",
"title": "Security"
},
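The following sketch illustrates the general class of flaw described above, not the original escape_shell_cmd()/PHF code; the file name phonebook.txt and the sample input are invented. Interpolating unsanitized input into a shell command lets a newline (or other metacharacter) smuggle in a second command, whereas passing the value as a single argument without a shell does not.

```python
# Illustration of the bug class only; not the actual PHF code.
import subprocess

user_input = "Alice\ncat /etc/passwd"   # hostile input containing a newline

# Vulnerable pattern: the whole string is parsed by the shell, so the newline
# terminates the grep command and starts a second one.
#   subprocess.run("grep " + user_input + " phonebook.txt", shell=True)

# Safer pattern: no shell is involved, so the input stays a single argument.
subprocess.run(["grep", user_input, "phonebook.txt"], check=False)
```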
{
"paragraph_id": 19,
"text": "This was the first widespread example of a new type of Web based attack, where unsanitized data from Web users could lead to execution of code on a Web server. Because the example code was installed by default, attacks were widespread and led to a number of security advisories in early 1996.",
"title": "Security"
},
{
"paragraph_id": 20,
"text": "For each incoming HTTP request, a Web server creates a new CGI process for handling it and destroys the CGI process after the HTTP request has been handled. Creating and destroying a process can be expensive: consume CPU time and memory resources than the actual work of generating the output of the process, especially when the CGI program still needs to be interpreted by a virtual machine. For a high number of HTTP requests, the resulting workload can quickly overwhelm the Web server.",
"title": "Alternatives"
},
{
"paragraph_id": 21,
"text": "The computational overhead involved in CGI process creation and destruction can be reduced by the following techniques:",
"title": "Alternatives"
},
{
"paragraph_id": 22,
"text": "The optimal configuration for any Web application depends on application-specific details, amount of traffic, and complexity of the transaction; these trade-offs need to be analyzed to determine the best implementation for a given task and time budget. Web frameworks offer an alternative to using CGI scripts to interact with user agents.",
"title": "Alternatives"
},
{
"paragraph_id": 23,
"text": "",
"title": "External links"
}
] | In computing, Common Gateway Interface (CGI) is an interface specification that enables web servers to execute an external program to process HTTP/S user requests. Such programs are often written in a scripting language and are commonly referred to as CGI scripts, but they may include compiled programs. A typical use case occurs when a web user submits a web form on a web page that uses CGI. The form's data is sent to the web server within an HTTP request with a URL denoting a CGI script. The web server then launches the CGI script in a new computer process, passing the form data to it. The output of the CGI script, usually in the form of HTML, is returned by the script to the Web server, and the server relays it back to the browser as its response to the browser's request. Developed in the early 1990s, CGI was the earliest common method available that allowed a web page to be interactive. Due to a necessity to run CGI scripts in a separate process every time the request comes in from a client, various alternatives were developed. | 2001-11-23T19:07:28Z | 2023-12-03T02:29:07Z | [
"Template:Refs",
"Template:Cite journal",
"Template:Cite web",
"Template:Web interfaces",
"Template:Use dmy dates",
"Template:Annotated link",
"Template:More citations needed",
"Template:Cite mailing list",
"Template:Authority control",
"Template:Short description"
] | https://en.wikipedia.org/wiki/Common_Gateway_Interface |
7,222 | Choctaw | The Choctaw (in the Choctaw language, Chahta) are a Native American people originally based in the Southeastern Woodlands, in what is now Alabama and Mississippi. Their Choctaw language is a Western Muskogean language. Today, Choctaw people are enrolled in three federally recognized tribes: the Choctaw Nation of Oklahoma, Mississippi Band of Choctaw Indians, and Jena Band of Choctaw Indians in Louisiana.
The Choctaw were first noted by Europeans in French written records of 1675. Their mother mound is Nanih Waiya, a great earthwork platform mound located in central-east Mississippi. Early Spanish explorers of the mid-16th century in the Southeast encountered ancestral Mississippian culture villages and chiefs.
The Choctaw coalesced as a people in the 17th century and developed at least three distinct political and geographical divisions: eastern, western, and southern. These different groups sometimes created distinct, independent alliances with nearby European powers. These included the French, based on the Gulf Coast and in Louisiana; the English of the Southeast, and the Spanish of Florida and Louisiana during the colonial era.
Most Choctaw allied with the Americans during the American Revolution, the War of 1812, and the Red Stick War, most notably at the Battle of New Orleans. European Americans considered the Choctaw to be one of the "Five Civilized Tribes" of the Southeast. The Choctaw and the United States agreed to a total of nine treaties. By the last three, the US gained vast land cessions in the Southeast. As part of Indian Removal, despite not having waged war against the United States, the majority of Choctaw were forcibly relocated to Indian Territory from 1831 to 1833. The Choctaw government in Indian Territory had three districts, each with its own chief, who together with the town chiefs sat on their National Council.
Those Choctaw who chose to stay in the state of Mississippi were considered state and U.S. citizens; they were one of the first major non-European ethnic groups to be granted citizenship. Article 14 in the 1830 treaty with the Choctaw stated Choctaws may wish to become citizens of the United States under the 14th Article of the Treaty of Dancing Rabbit Creek on all of the combined lands which were consolidated under Article I from all previous treaties between the United States and the Choctaw.
During the American Civil War, the Choctaw in both Indian Territory and Mississippi mostly sided with the Confederate States of America. Under the late 19th-century Dawes Act and Curtis Acts, the US federal government broke up tribal land holdings and dissolved tribal governments in Indian Territory in order to extinguish Indian land claims before admission of Oklahoma as a state in 1907. From that period, for several decades the US Bureau of Indian Affairs appointed chiefs of the Choctaw and other tribes in the former Indian Territory.
During World War I, Choctaw soldiers served in the US military as some of the first Native American codetalkers, using the Choctaw language. Since the Indian Reorganization Act of 1934, the Choctaw people in three areas have reconstituted their governments and gained federal recognition. The largest are the Choctaw Nation in Oklahoma.
Since the 20th century, the Mississippi Band of Choctaw Indians were federally recognized in 1945, the Choctaw Nation of Oklahoma in 1971, and the Jena Band of Choctaw Indians in 1995.
The Choctaw autonym is Chahta. Choctaw is an anglicization of Chahta, whose meaning is unknown. The anthropologist John R. Swanton suggested that the Choctaw derived their name from an early leader. Henry Halbert, a historian, suggests that their name is derived from the Choctaw phrase Hacha hatak (river people).
The Choctaw people are believed to have coalesced in the 17th century, perhaps from peoples from Alabama and the Plaquemine culture. Their culture continued to evolve in the Southeast. The Choctaw practiced head flattening as a ritual adornment for their people, but the practice eventually fell out of favor. Some of their communities had extensive trade and interaction with Europeans, and contact with people from Spain, France, and England greatly shaped their culture as well. After the United States was formed and its settlers began to move into the Southeast, the Choctaw were among the Five Civilized Tribes, who adopted some of their ways. They transitioned to yeoman farming methods, and accepted European Americans and African Americans into their society. In mid-summer the Mississippi Band of Choctaw Indians celebrate their traditional culture during the Choctaw Indian Fair with ball games, dancing, cooking and entertainment.
Within the Choctaws were two distinct moieties: Imoklashas (elders) and Inhulalatas (youth). Each moiety had several clans or Iksas; it is estimated there were about 12 Iksas altogether. The people had a matrilineal kinship system, with children born into the clan or iksa of the mother and taking their social status from it. In this system, their maternal uncles had important roles. Identity was established first by moiety and iksa; so a Choctaw first identified as Imoklasha or Inhulata, and second as Choctaw. Children belonged to the Iksa of their mother. The following were some major districts:
By the early 1930s, the anthropologist John Swanton wrote of the Choctaw: "[T]here are only the faintest traces of groups with truly totemic designations, the animal and plant names which occur seeming not to have had a totemic connotation." Swanton wrote, "Adam Hodgson ... told ... that there were tribes or families among the Indians, somewhat similar to the Scottish clans; such as, the Panther family, the Bird family, Raccoon Family, the Wolf family." The following are possible totemic clan designations:
Choctaw stickball, the oldest field sport in North America, was also known as the "little brother of war" because of its roughness and substitution for war. When disputes arose between Choctaw communities, stickball provided a civil way to settle issues. The stickball games would involve as few as twenty or as many as 300 players. The goal posts could be from a few hundred feet apart to a few miles. Goal posts were sometimes located within each opposing team's village. A Jesuit priest referenced stickball in 1729, and George Catlin painted the subject. The Mississippi Band of Choctaw Indians continue to practice the sport.
Chunkey was a game using a disk-shaped stone that was about 1–2 inches in length. Players would throw the disk down a 200-foot (61 m) corridor so that it could roll past the players at great speed. As the disk rolled down the corridor, players would throw wooden shafts at it. The object of the game was to strike the disk or prevent your opponents from hitting it.
Other games included using corn, cane, and moccasins. The corn game used five to seven kernels of corn. One side was blackened and the other side white. Players won points based on each color. One point was awarded for the black side and 5–7 points for the white side. There were usually only two players.
The Choctaw language is a member of the Muskogean family and was well known among the frontiersmen, such as Andrew Jackson and William Henry Harrison, of the early 19th century. The language is closely related to Chickasaw, and some linguists consider the two as dialects of a single language. The Choctaw language is the essence of tribal culture, tradition, and identity. Many Choctaw adults learned to speak the language before speaking English. The language is a part of daily life on the Mississippi Choctaw reservation. The following table is an example of Choctaw text and its translation:
The Choctaw believed in a good spirit and an evil spirit. They may have been sun, or Hvshtahli, worshippers. The historian John Swanton wrote,
[T]he Choctaws anciently regarded the sun as a deity ... the sun was ascribed the power of life and death. He was represented as looking down upon the earth, and as long as he kept his flaming eye fixed on any one, the person was safe ... fire, as the most striking representation of the sun, was considered as possessing intelligence, and as acting in concert with the sun ... [having] constant intercourse with the sun ...
The word nanpisa (the one who sees) expressed the reverence the Choctaw had for the sun.
Anthropologists theorize that the Mississippian ancestors of the Choctaw placed the sun at the center of their cosmological system. Mid-eighteenth-century Choctaws did view the sun as a being endowed with life. Choctaw diplomats, for example, spoke only on sunny days. If the day of a conference were cloudy or rainy, Choctaws delayed the meeting until the sun returned, usually on the pretext that they needed more time to discuss particulars. They believed the sun made sure that all talks were honest. The sun as a symbol of great power and reverence is a major component of southeastern Indian cultures.
Choctaw prophets were known to have addressed the sun. John Swanton wrote, "an old Choctaw informed Wright that before the arrival of the missionaries, they had no conception of prayer. He added, "I have indeed heard it asserted by some, that anciently their hopaii, or prophets, on some occasions were accustomed to address the sun ..."
The colorful dresses worn by today's Choctaw are made by hand. They are based on designs of their ancestors, who adapted 19th-century European-American styles to their needs. Today many Choctaw wear such traditional clothing mainly for special events. Choctaw elders, especially the women, dress in their traditional garb every day. Choctaw dresses are trimmed by full diamond, half diamond or circle, and crosses that represent stickball sticks.
Early Choctaw communities worked communally and shared their harvest. They had trouble understanding why English settlers allowed their poor to suffer from hunger. In Ireland, the generosity of the Choctaw nation during their Great Famine in the mid-nineteenth century is remembered to this day and recently marked by a sculpture, 'Kindred Spirits', in a park at Midleton, Cork.
Land was the most valuable asset, which the Native Americans held in collective stewardship. The United States systematically obtained Choctaw land for conventional European-American settlement through treaties, legislation, and threats of warfare. Although the Choctaw made treaties with Great Britain, France, Spain, and the Confederate States of America; the nation signed only nine treaties with the United States. Some treaties which the US made with other nations, such as the Treaty of San Lorenzo, indirectly affected the Choctaw.
Reservations can be found in Louisiana (Jena Band of Choctaw Indians), Mississippi (Mississippi Band of Choctaw Indians), and Oklahoma (Choctaw Nation of Oklahoma). The Oklahoma reservation is defined by treaty. Other population centers can be found throughout the United States. | [
{
"paragraph_id": 0,
"text": "The Choctaw (in the Choctaw language, Chahta) are a Native American people originally based in the Southeastern Woodlands, in what is now Alabama and Mississippi. Their Choctaw language is a Western Muskogean language. Today, Choctaw people are enrolled in three federally recognized tribes: the Choctaw Nation of Oklahoma, Mississippi Band of Choctaw Indians, and Jena Band of Choctaw Indians in Louisiana.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Choctaw were first noted by Europeans in French written records of 1675. Their mother mound is Nanih Waiya, a great earthwork platform mound located in central-east Mississippi. Early Spanish explorers of the mid-16th century in the Southeast encountered ancestral Mississippian culture villages and chiefs.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Choctaw coalesced as a people in the 17th century and developed at least three distinct political and geographical divisions: eastern, western, and southern. These different groups sometimes created distinct, independent alliances with nearby European powers. These included the French, based on the Gulf Coast and in Louisiana; the English of the Southeast, and the Spanish of Florida and Louisiana during the colonial era.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Most Choctaw allied with the Americans during American Revolution, War of 1812, and the Red Stick War, most notably at the Battle of New Orleans. European Americans considered the Choctaw to be one of the \"Five Civilized Tribes\" of the Southeast. The Choctaw and the United States agreed to a total of nine treaties. By the last three, the US gained vast land cessions in the Southeast. As part of Indian Removal, despite not having waged war against the United States, the majority of Choctaw were forcibly relocated to Indian Territory from 1831 to 1833. The Choctaw government in Indian Territory had three districts, each with its own chief, who together with the town chiefs sat on their National Council.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Those Choctaw who chose to stay in the state of Mississippi were considered state and U.S. citizens; they were one of the first major non-European ethnic groups to be granted citizenship. Article 14 in the 1830 treaty with the Choctaw stated Choctaws may wish to become citizens of the United States under the 14th Article of the Treaty of Dancing Rabbit Creek on all of the combined lands which were consolidated under Article I from all previous treaties between the United States and the Choctaw.",
"title": ""
},
{
"paragraph_id": 5,
"text": "During the American Civil War, the Choctaw in both Indian Territory and Mississippi mostly sided with the Confederate States of America. Under the late 19th-century Dawes Act and Curtis Acts, the US federal government broke up tribal land holdings and dissolved tribal governments in Indian Territory in order to extinguish Indian land claims before admission of Oklahoma as a state in 1907. From that period, for several decades the US Bureau of Indian Affairs appointed chiefs of the Choctaw and other tribes in the former Indian Territory.",
"title": ""
},
{
"paragraph_id": 6,
"text": "During World War I, Choctaw soldiers served in the US military as some of the first Native American codetalkers, using the Choctaw language. Since the Indian Reorganization Act of 1934, the Choctaw people in three areas have reconstituted their governments and gained federal recognition. The largest are the Choctaw Nation in Oklahoma.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Since the 20th century, the Mississippi Band of Choctaw Indians were federally recognized in 1945, the Choctaw Nation of Oklahoma in 1971, and the Jena Band of Choctaw Indians in 1995.",
"title": ""
},
{
"paragraph_id": 8,
"text": "The Choctaw autonym is Chahta. Choctaw is an anglization of Chahta, whose meaning is unknown. The anthropologist John R. Swanton suggested that the Choctaw derived their name from an early leader. Henry Halbert, a historian, suggests that their name is derived from the Choctaw phrase Hacha hatak (river people).",
"title": "Etymology"
},
{
"paragraph_id": 9,
"text": "The Choctaw people are believed to have coalesced in the 17th century, perhaps from peoples from Alabama and the Plaquemine culture. Their culture continued to evolve in the Southeast. The Choctaw practiced Head flattening as a ritual adornment for its people, but the practice eventually fell out of favor. Some of their communities had extensive trade and interaction with Europeans, including people from Spain, France, and England greatly shaped it as well. After the United States was formed and its settlers began to move into the Southeast, the Choctaw were among the Five Civilized Tribes, who adopted some of their ways. They transitioned to yeoman farming methods, and accepted European Americans and African Americans into their society. In mid-summer the Mississippi Band of Choctaw Indians celebrate their traditional culture during the Choctaw Indian Fair with ball games, dancing, cooking and entertainment.",
"title": "Culture"
},
{
"paragraph_id": 10,
"text": "Within the Choctaws were two distinct moieties: Imoklashas (elders) and Inhulalatas (youth). Each moiety had several clans or Iksas; it is estimated there were about 12 Iksas altogether. The people had a matrilineal kinship system, with children born into the clan or iksa of the mother and taking their social status from it. In this system, their maternal uncles had important roles. Identity was established first by moiety and iksa; so a Choctaw first identified as Imoklasha or Inhulata, and second as Choctaw. Children belonged to the Iksa of their mother. The following were some major districts:",
"title": "Culture"
},
{
"paragraph_id": 11,
"text": "By the early 1930s, the anthropologist John Swanton wrote of the Choctaw: \"[T]here are only the faintest traces of groups with truly totemic designations, the animal and plant names which occur seeming not to have had a totemic connotation.\" Swanton wrote, \"Adam Hodgson ... told ... that there were tribes or families among the Indians, somewhat similar to the Scottish clans; such as, the Panther family, the Bird family, Raccoon Family, the Wolf family.\" The following are possible totemic clan designations:",
"title": "Culture"
},
{
"paragraph_id": 12,
"text": "Choctaw stickball, the oldest field sport in North America, was also known as the \"little brother of war\" because of its roughness and substitution for war. When disputes arose between Choctaw communities, stickball provided a civil way to settle issues. The stickball games would involve as few as twenty or as many as 300 players. The goal posts could be from a few hundred feet apart to a few miles. Goal posts were sometimes located within each opposing team's village. A Jesuit priest referenced stickball in 1729, and George Catlin painted the subject. The Mississippi Band of Choctaw Indians continue to practice the sport.",
"title": "Culture"
},
{
"paragraph_id": 13,
"text": "Chunkey was a game using a disk-shaped stone that was about 1–2 inches in length. Players would throw the disk down a 200-foot (61 m) corridor so that it could roll past the players at great speed. As the disk rolled down the corridor, players would throw wooden shafts at it. The object of the game was to strike the disk or prevent your opponents from hitting it.",
"title": "Culture"
},
{
"paragraph_id": 14,
"text": "Other games included using corn, cane, and moccasins. The corn game used five to seven kernels of corn. One side was blackened and the other side white. Players won points based on each color. One point was awarded for the black side and 5–7 points for the white side. There were usually only two players.",
"title": "Culture"
},
{
"paragraph_id": 15,
"text": "The Choctaw language is a member of the Muskogean family and was well known among the frontiersmen, such as Andrew Jackson and William Henry Harrison, of the early 19th century. The language is closely related to Chickasaw, and some linguists consider the two as dialects of a single language. The Choctaw language is the essence of tribal culture, tradition, and identity. Many Choctaw adults learned to speak the language before speaking English. The language is a part of daily life on the Mississippi Choctaw reservation. The following table is an example of Choctaw text and its translation:",
"title": "Culture"
},
{
"paragraph_id": 16,
"text": "The Choctaw believed in a good spirit and an evil spirit. They may have been sun, or Hvshtahli, worshippers. The historian John Swanton wrote,",
"title": "Culture"
},
{
"paragraph_id": 17,
"text": "[T]he Choctaws anciently regarded the sun as a deity ... the sun was ascribed the power of life and death. He was represented as looking down upon the earth, and as long as he kept his flaming eye fixed on any one, the person was safe ... fire, as the most striking representation of the sun, was considered as possessing intelligence, and as acting in concert with the sun ... [having] constant intercourse with the sun ...",
"title": "Culture"
},
{
"paragraph_id": 18,
"text": "The word nanpisa (the one who sees) expressed the reverence the Choctaw had for the sun.",
"title": "Culture"
},
{
"paragraph_id": 19,
"text": "Anthropologist theorize that the Mississippian ancestors of the Choctaw placed the sun at the center of their cosmological system. Mid-eighteenth-century Choctaws did view the sun as a being endowed with life. Choctaw diplomats, for example, spoke only on sunny days. If the day of a conference were cloudy or rainy, Choctaws delayed the meeting until the sun returned, usually on the pretext that they needed more time to discuss particulars. They believed the sun made sure that all talks were honest. The sun as a symbol of great power and reverence is a major component of southeastern Indian cultures.",
"title": "Culture"
},
{
"paragraph_id": 20,
"text": "Choctaw prophets were known to have addressed the sun. John Swanton wrote, \"an old Choctaw informed Wright that before the arrival of the missionaries, they had no conception of prayer. He added, \"I have indeed heard it asserted by some, that anciently their hopaii, or prophets, on some occasions were accustomed to address the sun ...\"",
"title": "Culture"
},
{
"paragraph_id": 21,
"text": "The colorful dresses worn by today's Choctaw are made by hand. They are based on designs of their ancestors, who adapted 19th-century European-American styles to their needs. Today many Choctaw wear such traditional clothing mainly for special events. Choctaw elders, especially the women, dress in their traditional garb every day. Choctaw dresses are trimmed by full diamond, half diamond or circle, and crosses that represent stickball sticks.",
"title": "Culture"
},
{
"paragraph_id": 22,
"text": "Early Choctaw communities worked communally and shared their harvest. They had trouble understanding why English settlers allowed their poor to suffer from hunger. In Ireland, the generosity of the Choctaw nation during their Great Famine in the mid-nineteenth century is remembered to this day and recently marked by a sculpture, 'Kindred Spirits', in a park at Midleton, Cork.",
"title": "Culture"
},
{
"paragraph_id": 23,
"text": "Land was the most valuable asset, which the Native Americans held in collective stewardship. The United States systematically obtained Choctaw land for conventional European-American settlement through treaties, legislation, and threats of warfare. Although the Choctaw made treaties with Great Britain, France, Spain, and the Confederate States of America; the nation signed only nine treaties with the United States. Some treaties which the US made with other nations, such as the Treaty of San Lorenzo, indirectly affected the Choctaw.",
"title": "Treaties"
},
{
"paragraph_id": 24,
"text": "Reservations can be found in Louisiana (Jena Band of Choctaw Indians), Mississippi (Mississippi Band of Choctaw Indians), and Oklahoma (Choctaw Nation of Oklahoma). The Oklahoma reservation is defined by treaty. Other population centers can be found throughout the United States.",
"title": "Reservations"
}
] | The Choctaw are a Native American people originally based in the Southeastern Woodlands, in what is now Alabama and Mississippi. Their Choctaw language is a Western Muskogean language. Today, Choctaw people are enrolled in three federally recognized tribes: the Choctaw Nation of Oklahoma, Mississippi Band of Choctaw Indians, and Jena Band of Choctaw Indians in Louisiana. The Choctaw were first noted by Europeans in French written records of 1675. Their mother mound is Nanih Waiya, a great earthwork platform mound located in central-east Mississippi. Early Spanish explorers of the mid-16th century in the Southeast encountered ancestral Mississippian culture villages and chiefs. The Choctaw coalesced as a people in the 17th century and developed at least three distinct political and geographical divisions: eastern, western, and southern. These different groups sometimes created distinct, independent alliances with nearby European powers. These included the French, based on the Gulf Coast and in Louisiana; the English of the Southeast, and the Spanish of Florida and Louisiana during the colonial era. Most Choctaw allied with the Americans during American Revolution, War of 1812, and the Red Stick War, most notably at the Battle of New Orleans. European Americans considered the Choctaw to be one of the "Five Civilized Tribes" of the Southeast. The Choctaw and the United States agreed to a total of nine treaties. By the last three, the US gained vast land cessions in the Southeast. As part of Indian Removal, despite not having waged war against the United States, the majority of Choctaw were forcibly relocated to Indian Territory from 1831 to 1833. The Choctaw government in Indian Territory had three districts, each with its own chief, who together with the town chiefs sat on their National Council. Those Choctaw who chose to stay in the state of Mississippi were considered state and U.S. citizens; they were one of the first major non-European ethnic groups to be granted citizenship. Article 14 in the 1830 treaty with the Choctaw stated Choctaws may wish to become citizens of the United States under the 14th Article of the Treaty of Dancing Rabbit Creek on all of the combined lands which were consolidated under Article I from all previous treaties between the United States and the Choctaw. During the American Civil War, the Choctaw in both Indian Territory and Mississippi mostly sided with the Confederate States of America. Under the late 19th-century Dawes Act and Curtis Acts, the US federal government broke up tribal land holdings and dissolved tribal governments in Indian Territory in order to extinguish Indian land claims before admission of Oklahoma as a state in 1907. From that period, for several decades the US Bureau of Indian Affairs appointed chiefs of the Choctaw and other tribes in the former Indian Territory. During World War I, Choctaw soldiers served in the US military as some of the first Native American codetalkers, using the Choctaw language. Since the Indian Reorganization Act of 1934, the Choctaw people in three areas have reconstituted their governments and gained federal recognition. The largest are the Choctaw Nation in Oklahoma. Since the 20th century, the Mississippi Band of Choctaw Indians were federally recognized in 1945, the Choctaw Nation of Oklahoma in 1971, and the Jena Band of Choctaw Indians in 1995. | 2002-02-25T15:43:11Z | 2023-11-27T21:51:40Z | [
"Template:Main",
"Template:Cite book",
"Template:Refend",
"Template:Authority control",
"Template:Short description",
"Template:Other uses",
"Template:Infobox ethnic group",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Infobox ethnonym",
"Template:Convert",
"Template:Blockquote",
"Template:Choctaw",
"Template:Circa",
"Template:Portal",
"Template:ISBN",
"Template:Refbegin",
"Template:Cite EB1911",
"Template:Rp",
"Template:Further",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Choctaw |
7,224 | Calypso | Calypso, Calipso or Kalypso may refer to: | [
{
"paragraph_id": 0,
"text": "Calypso, Calipso or Kalypso may refer to:",
"title": ""
}
] | Calypso, Calipso or Kalypso may refer to: | 2001-11-23T22:14:51Z | 2023-11-08T13:25:06Z | [
"Template:Ship",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right"
] | https://en.wikipedia.org/wiki/Calypso |
7,225 | Chemical affinity | In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.
The idea of affinity is extremely old. Many attempts have been made at identifying its origins. The majority of such attempts, however, except in a general manner, end in futility since "affinities" lie at the basis of all magic, thereby pre-dating science. Physical chemistry, however, was one of the first branches of science to study and formulate a "theory of affinity". The name affinitas was first used in the sense of chemical relation by German philosopher Albertus Magnus near the year 1250. Later, those such as Robert Boyle, John Mayow, Johann Glauber, Isaac Newton, and Georg Stahl put forward ideas on elective affinity in attempts to explain how heat is evolved during combustion reactions.
The term affinity has been used figuratively since c. 1600 in discussions of structural relationships in chemistry, philology, etc., and reference to "natural attraction" is from 1616. "Chemical affinity", historically, has referred to the "force" that causes chemical reactions, as well as, more generally, and earlier, the "tendency to combine" of any pair of substances. The broad definition, used generally throughout history, is that chemical affinity is that whereby substances enter into or resist decomposition.
The modern term chemical affinity is a somewhat modified variation of its eighteenth-century precursor "elective affinity" or elective attractions, a term that was used by the 18th century chemistry lecturer William Cullen. Whether Cullen coined the phrase is not clear, but his usage seems to predate most others, although it rapidly became widespread across Europe, and was used in particular by the Swedish chemist Torbern Olof Bergman throughout his book De attractionibus electivis (1775). Affinity theories were used in one way or another by most chemists from around the middle of the 18th century into the 19th century to explain and organise the different combinations into which substances could enter and from which they could be retrieved. Antoine Lavoisier, in his famed 1789 Traité Élémentaire de Chimie (Elements of Chemistry), refers to Bergman's work and discusses the concept of elective affinities or attractions.
According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
According to Prigogine, the term was introduced and developed by Théophile de Donder.
Goethe used the concept in his novel Elective Affinities (1809).
The affinity concept was very closely linked to the visual representation of substances on a table. The first-ever affinity table, which was based on displacement reactions, was published in 1718 by the French chemist Étienne François Geoffroy. Geoffroy's name is best known in connection with these tables of "affinities" (tables des rapports), which were first presented to the French Academy of Sciences in 1718 and 1720.
During the 18th century many versions of the table were proposed with leading chemists like Torbern Bergman in Sweden and Joseph Black in Scotland adapting it to accommodate new chemical discoveries. All the tables were essentially lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents.
Crucially, the table was the central graphic tool used to teach chemistry to students and its visual arrangement was often combined with other kinds of diagrams. Joseph Black, for example, used the table in combination with chiastic and circlet diagrams to visualise the core principles of chemical affinity. Affinity tables were used throughout Europe until the early 19th century when they were displaced by affinity concepts introduced by Claude Berthollet.
In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.
In modern terms, we relate affinity to the phenomenon whereby certain atoms or molecules have the tendency to aggregate or bond. For example, in the 1919 book Chemistry of Human Life physician George W. Carey states that, "Health depends on a proper amount of iron phosphate Fe3(PO4)2 in the blood, for the molecules of this salt have chemical affinity for oxygen and carry it to all parts of the organism." In this antiquated context, chemical affinity is sometimes found synonymous with the term "magnetic attraction". Many writings, up until about 1925, also refer to a "law of chemical affinity".
Ilya Prigogine summarized the concept of affinity, saying, "All chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish."
The present IUPAC definition is that affinity A is the negative partial derivative of Gibbs free energy G with respect to extent of reaction ξ at constant pressure and temperature. That is,
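Written out explicitly (with p and T the pressure and temperature held constant in the differentiation), this definition is

$$A = -\left( \frac{\partial G}{\partial \xi} \right)_{p,T}.$$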
It follows that affinity is positive for spontaneous reactions.
In 1923, the Belgian mathematician and physicist Théophile de Donder derived a relation between affinity and the Gibbs free energy of a chemical reaction. Through a series of derivations, de Donder showed that if we consider a mixture of chemical species with the possibility of chemical reaction, it can be proven that the following relation holds:
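In its usual modern statement (a standard rendering of de Donder's result rather than his original notation, with ν_i and μ_i the stoichiometric coefficient and chemical potential of species i), the relation is

$$A = -\left( \frac{\partial G}{\partial \xi} \right)_{p,T} = -\sum_i \nu_i \mu_i.$$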
With the writings of Théophile de Donder as precedent, Ilya Prigogine and Defay in Chemical Thermodynamics (1954) defined chemical affinity as the rate of change of the uncompensated heat of reaction Q' as the reaction progress variable or reaction extent ξ grows infinitesimally:
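In symbols, in the differential notation usually used for this definition,

$$A = \frac{\mathrm{d}Q'}{\mathrm{d}\xi},$$

so that A dξ = dQ′ ≥ 0 by the second law, with equality only at equilibrium.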
This definition is useful for quantifying the factors responsible both for the state of equilibrium systems (where A = 0), and for changes of state of non-equilibrium systems (where A ≠ 0). | [
{
"paragraph_id": 0,
"text": "In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The idea of affinity is extremely old. Many attempts have been made at identifying its origins. The majority of such attempts, however, except in a general manner, end in futility since \"affinities\" lie at the basis of all magic, thereby pre-dating science. Physical chemistry, however, was one of the first branches of science to study and formulate a \"theory of affinity\". The name affinitas was first used in the sense of chemical relation by German philosopher Albertus Magnus near the year 1250. Later, those as Robert Boyle, John Mayow, Johann Glauber, Isaac Newton, and Georg Stahl put forward ideas on elective affinity in attempts to explain how heat is evolved during combustion reactions.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The term affinity has been used figuratively since c. 1600 in discussions of structural relationships in chemistry, philology, etc., and reference to \"natural attraction\" is from 1616. \"Chemical affinity\", historically, has referred to the \"force\" that causes chemical reactions. as well as, more generally, and earlier, the ″tendency to combine″ of any pair of substances. The broad definition, used generally throughout history, is that chemical affinity is that whereby substances enter into or resist decomposition.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The modern term chemical affinity is a somewhat modified variation of its eighteenth-century precursor \"elective affinity\" or elective attractions, a term that was used by the 18th century chemistry lecturer William Cullen. Whether Cullen coined the phrase is not clear, but his usage seems to predate most others, although it rapidly became widespread across Europe, and was used in particular by the Swedish chemist Torbern Olof Bergman throughout his book De attractionibus electivis (1775). Affinity theories were used in one way or another by most chemists from around the middle of the 18th century into the 19th century to explain and organise the different combinations into which substances could enter and from which they could be retrieved. Antoine Lavoisier, in his famed 1789 Traité Élémentaire de Chimie (Elements of Chemistry), refers to Bergman's work and discusses the concept of elective affinities or attractions.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term \"affinity\" by the term \"free energy\" in much of the English-speaking world.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "According to Prigogine, the term was introduced and developed by Théophile de Donder.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Goethe used the concept in his novel Elective Affinities (1809).",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The affinity concept was very closely linked to the visual representation of substances on a table. The first-ever affinity table, which was based on displacement reactions, was published in 1718 by the French chemist Étienne François Geoffroy. Geoffroy's name is best known in connection with these tables of \"affinities\" (tables des rapports), which were first presented to the French Academy of Sciences in 1718 and 1720.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "During the 18th century many versions of the table were proposed with leading chemists like Torbern Bergman in Sweden and Joseph Black in Scotland adapting it to accommodate new chemical discoveries. All the tables were essentially lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Crucially, the table was the central graphic tool used to teach chemistry to students and its visual arrangement was often combined with other kinds diagrams. Joseph Black, for example, used the table in combination with chiastic and circlet diagrams to visualise the core principles of chemical affinity. Affinity tables were used throughout Europe until the early 19th century when they were displaced by affinity concepts introduced by Claude Berthollet.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition.",
"title": "Modern conceptions"
},
{
"paragraph_id": 11,
"text": "In modern terms, we relate affinity to the phenomenon whereby certain atoms or molecules have the tendency to aggregate or bond. For example, in the 1919 book Chemistry of Human Life physician George W. Carey states that, \"Health depends on a proper amount of iron phosphate Fe3(PO4)2 in the blood, for the molecules of this salt have chemical affinity for oxygen and carry it to all parts of the organism.\" In this antiquated context, chemical affinity is sometimes found synonymous with the term \"magnetic attraction\". Many writings, up until about 1925, also refer to a \"law of chemical affinity\".",
"title": "Modern conceptions"
},
{
"paragraph_id": 12,
"text": "Ilya Prigogine summarized the concept of affinity, saying, \"All chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish.\"",
"title": "Modern conceptions"
},
{
"paragraph_id": 13,
"text": "The present IUPAC definition is that affinity A is the negative partial derivative of Gibbs free energy G with respect to extent of reaction ξ at constant pressure and temperature. That is,",
"title": "Thermodynamics"
},
{
"paragraph_id": 14,
"text": "It follows that affinity is positive for spontaneous reactions.",
"title": "Thermodynamics"
},
{
"paragraph_id": 15,
"text": "In 1923, the Belgian mathematician and physicist Théophile de Donder derived a relation between affinity and the Gibbs free energy of a chemical reaction. Through a series of derivations, de Donder showed that if we consider a mixture of chemical species with the possibility of chemical reaction, it can be proven that the following relation holds:",
"title": "Thermodynamics"
},
{
"paragraph_id": 16,
"text": "With the writings of Théophile de Donder as precedent, Ilya Prigogine and Defay in Chemical Thermodynamics (1954) defined chemical affinity as the rate of change of the uncompensated heat of reaction Q' as the reaction progress variable or reaction extent ξ grows infinitesimally:",
"title": "Thermodynamics"
},
{
"paragraph_id": 17,
"text": "This definition is useful for quantifying the factors responsible both for the state of equilibrium systems (where A = 0), and for changes of state of non-equilibrium systems (where A ≠ 0).",
"title": "Thermodynamics"
}
] | In chemical physics and physical chemistry, chemical affinity is the electronic property by which dissimilar chemical species are capable of forming chemical compounds. Chemical affinity can also refer to the tendency of an atom or compound to combine by chemical reaction with atoms or compounds of unlike composition. | 2002-02-25T15:51:15Z | 2023-09-07T08:36:54Z | [
"Template:Nowrap",
"Template:Harvnb",
"Template:Cite journal",
"Template:Cite web",
"Template:Lead too short",
"Template:Lang",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:EB1911"
] | https://en.wikipedia.org/wiki/Chemical_affinity |
7,227 | Comet Hale–Bopp | Comet Hale–Bopp (formally designated C/1995 O1) is a comet that was one of the most widely observed of the 20th century and one of the brightest seen for many decades.
Alan Hale and Thomas Bopp discovered Comet Hale–Bopp separately on July 23, 1995, before it became visible to the naked eye. It is difficult to predict the maximum brightness of new comets with any degree of certainty, but Hale–Bopp exceeded most predictions when it passed perihelion on April 1, 1997, reaching about magnitude −1.8. It was visible to the naked eye for a record 18 months, due to its massive nucleus size. This is twice as long as the Great Comet of 1811, the previous record holder. Accordingly, Hale–Bopp was dubbed the great comet of 1997.
The comet was discovered independently on July 23, 1995, by two observers, Alan Hale and Thomas Bopp, both in the United States.
Hale had spent many hundreds of hours searching for comets without success, and was tracking known comets from his driveway in New Mexico when he chanced upon Hale–Bopp just after midnight. The comet had an apparent magnitude of 10.5 and lay near the globular cluster M70 in the constellation of Sagittarius. Hale first established that there was no other deep-sky object near M70, and then consulted a directory of known comets, finding that none were known to be in this area of the sky. Once he had established that the object was moving relative to the background stars, he emailed the Central Bureau for Astronomical Telegrams, the clearing house for astronomical discoveries.
Bopp did not own a telescope. He was out with friends near Stanfield, Arizona, observing star clusters and galaxies when he chanced across the comet while at the eyepiece of his friend's telescope. He realized he might have spotted something new when, like Hale, he checked his star maps to determine if any other deep-sky objects were known to be near M70, and found that there were none. He alerted the Central Bureau for Astronomical Telegrams through a Western Union telegram. Brian G. Marsden, who had run the bureau since 1968, laughed, "Nobody sends telegrams anymore. I mean, by the time that telegram got here, Alan Hale had already e-mailed us three times with updated coordinates."
The following morning, it was confirmed that this was a new comet, and it was given the designation C/1995 O1. The discovery was announced in International Astronomical Union circular 6187.
Hale–Bopp's orbital position was calculated as 7.2 astronomical units (au) from the Sun, placing it between Jupiter and Saturn and by far the greatest distance from Earth at which a comet had been discovered by amateurs. Most comets at this distance are extremely faint, and show no discernible activity, but Hale–Bopp already had an observable coma. A precovery image taken at the Anglo-Australian Telescope in 1993 was found to show the then-unnoticed comet some 13 au from the Sun, a distance at which most comets are essentially unobservable. (Halley's Comet was more than 100 times fainter at the same distance from the Sun.) Analysis indicated later that its comet nucleus was 60±20 kilometres in diameter, approximately six times the size of Halley's Comet.
Its great distance and surprising activity indicated that comet Hale–Bopp might become very bright when it reached perihelion in 1997. However, comet scientists were wary – comets can be extremely unpredictable, and many have large outbursts at great distance only to diminish in brightness later. Comet Kohoutek in 1973 had been touted as a 'comet of the century' and turned out to be unspectacular.
Hale–Bopp became visible to the naked eye in May 1996, and although its rate of brightening slowed considerably during the latter half of that year, scientists were still cautiously optimistic that it would become very bright. It was too closely aligned with the Sun to be observable during December 1996, but when it reappeared in January 1997 it was already bright enough to be seen by anyone who looked for it, even from large cities with light-polluted skies.
The Internet was a growing phenomenon at the time, and numerous websites that tracked the comet's progress and provided daily images from around the world became extremely popular. The Internet played a large role in encouraging the unprecedented public interest in comet Hale–Bopp.
As the comet approached the Sun, it continued to brighten, shining at 2nd magnitude in February, and showing a growing pair of tails, the blue gas tail pointing straight away from the Sun and the yellowish dust tail curving away along its orbit. On March 9, a solar eclipse in China, Mongolia and eastern Siberia allowed observers there to see the comet in the daytime. Hale–Bopp had its closest approach to Earth on March 22, 1997, at a distance of 1.315 au.
As it passed perihelion on April 1, 1997, the comet developed into a spectacular sight. It shone brighter than any star in the sky except Sirius, and its dust tail stretched 40–45 degrees across the sky. The comet was visible well before the sky got fully dark each night, and while many great comets are very close to the Sun as they pass perihelion, comet Hale–Bopp was visible all night to Northern Hemisphere observers.
After its perihelion passage, the comet moved into the southern celestial hemisphere. The comet was much less impressive to southern hemisphere observers than it had been in the northern hemisphere, but southerners were able to see the comet gradually fade from view during the second half of 1997. The last naked-eye observations were reported in December 1997, which meant that the comet had remained visible without aid for 569 days, or about 18 and a half months. The previous record had been set by the Great Comet of 1811, which was visible to the naked eye for about 9 months.
The comet continued to fade as it receded, but was still tracked by astronomers. In October 2007, 10 years after the perihelion and at distance of 25.7 au from Sun, the comet was still active as indicated by the detection of the CO-driven coma. Herschel Space Observatory images taken in 2010 suggest comet Hale–Bopp is covered in a fresh frost layer. Hale–Bopp was again detected in December 2010 when it was 30.7 au away from the Sun, and in 2012, at 33.2 au from the Sun. The James Webb Space Telescope observed Hale–Bopp in 2022, when it was 46.2 au from the Sun.
The comet likely made its previous perihelion 4,200 years ago, in July 2215 BC. The estimated closest approach to Earth was 1.4 au, and it may have been observed in ancient Egypt during the 6th dynasty reign of the Pharaoh Pepi II (Reign: 2247 – c. 2216 BC). Pepi's pyramid at Saqqara contains a text referring to an "nhh-star" as a companion of the pharaoh in the heavens, where "nhh" is the hieroglyph for long hair.
Hale–Bopp may have had a near collision with Jupiter in early June 2215 BC, which probably caused a dramatic change in its orbit, and 2215 BC may have been its first passage through the inner Solar System from the Oort cloud. The comet's current orbit is almost perpendicular to the plane of the ecliptic, so further close approaches to planets will be rare. However, in April 1996 the comet passed within 0.77 au of Jupiter, close enough for its orbit to be measurably affected by the planet's gravity. The comet's orbit was shortened considerably to a period of roughly 2,399 years, and it will next return to the inner Solar System around the year 4385. Its greatest distance from the Sun (aphelion) will be about 354 au, reduced from about 525 au.
The estimated probability of Hale–Bopp's striking Earth in future passages through the inner Solar System is remote, about 2.5×10⁻⁹ per orbit. However, given that the comet nucleus is around 60 km in diameter, the consequences of such an impact would be apocalyptic. Weissman conservatively estimates the diameter at 35 km; an estimated density of 0.6 g/cm³ then gives a cometary mass of 1.3×10¹⁹ g. At a probable impact velocity of 52.5 km/s, impact energy can be calculated as 1.9×10³² ergs, or 4.4×10⁹ megatons, about 44 times the estimated energy of the K-T impact event.
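As a rough consistency check using only the mass and velocity quoted above (not an independent estimate):

$$E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}\,(1.3\times10^{16}\ \mathrm{kg})\,(5.25\times10^{4}\ \mathrm{m/s})^{2} \approx 1.8\times10^{25}\ \mathrm{J} \approx 1.8\times10^{32}\ \mathrm{erg},$$

which, at 4.184×10¹⁵ J per megaton of TNT, is roughly 4×10⁹ megatons, in line with the figures given above.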
Over many orbits, the cumulative effect of gravitational perturbations on comets with high orbital inclinations and small perihelion distances is generally to reduce the perihelion distance to very small values. Hale–Bopp has about a 15% chance of eventually becoming a sungrazing comet through this process. If such is the case, it could undergo huge mass loss, or break up into smaller pieces like the Kreutz sungrazers. It would also be extremely bright, due to a combination of closeness to the Sun and nuclei size, potentially exceeding Halley’s Comet in 837 AD.
Due to the massive size of its nucleus, Comet Hale–Bopp was observed intensively by astronomers during its perihelion passage, and several important advances in cometary science resulted from these observations. The dust production rate of the comet was very high (up to 2.0×10⁶ kg/s), which may have made the inner coma optically thick. Based on the properties of the dust grains—high temperature, high albedo and strong 10 μm silicate emission feature—the astronomers concluded the dust grains are smaller than observed in any other comet.
Hale–Bopp showed the highest ever linear polarization detected for any comet. Such polarization is the result of solar radiation getting scattered by the dust particles in the coma of the comet and depends on the nature of the grains. It further confirms that the dust grains in the coma of comet Hale–Bopp were smaller than inferred in any other comet.
One of the most remarkable discoveries was that the comet had a third type of tail. In addition to the well-known gas and dust tails, Hale–Bopp also exhibited a faint sodium tail, only visible with powerful instruments with dedicated filters. Sodium emission had been previously observed in other comets, but had not been shown to come from a tail. Hale–Bopp's sodium tail consisted of neutral atoms (not ions), and extended to some 50 million kilometres in length.
The source of the sodium appeared to be the inner coma, although not necessarily the nucleus. There are several possible mechanisms for generating a source of sodium atoms, including collisions between dust grains surrounding the nucleus, and "sputtering" of sodium from dust grains by ultraviolet light. It is not yet established which mechanism is primarily responsible for creating Hale–Bopp's sodium tail, and the narrow and diffuse components of the tail may have different origins.
While the comet's dust tail roughly followed the path of the comet's orbit and the gas tail pointed almost directly away from the Sun, the sodium tail appeared to lie between the two. This implies that the sodium atoms are driven away from the comet's head by radiation pressure.
The abundance of deuterium in comet Hale–Bopp in the form of heavy water was found to be about twice that of Earth's oceans. If Hale–Bopp's deuterium abundance is typical of all comets, this implies that although cometary impacts are thought to be the source of a significant amount of the water on Earth, they cannot be the only source.
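The argument here is a simple two-component mixing calculation: if a fraction of Earth's ocean water came from comets with Hale–Bopp-like deuterium enrichment and the rest from a reservoir with lower deuterium content, that fraction is fixed by requiring the mixture to match the ocean value. The sketch below illustrates the reasoning; the specific D/H numbers (ocean ≈ 1.56×10⁻⁴, Hale–Bopp ≈ 3.3×10⁻⁴, and a roughly chondritic second reservoir at 1.4×10⁻⁴) are not given in the text and are used only as illustrative assumptions.

    # Illustrative two-component mixing estimate of the cometary fraction of Earth's water.
    # All three D/H values are assumptions for illustration, not figures from the text.
    dh_ocean = 1.56e-4       # assumed terrestrial ocean water (VSMOW-like)
    dh_comet = 3.3e-4        # assumed Hale-Bopp-like cometary water, about twice the ocean value
    dh_other = 1.4e-4        # assumed second reservoir (roughly chondritic)

    # Mixing: f * dh_comet + (1 - f) * dh_other = dh_ocean  =>  solve for f
    f_comet = (dh_ocean - dh_other) / (dh_comet - dh_other)
    print(f"cometary fraction ~ {f_comet:.0%}")   # about 8% under these assumptions

Under these assumed values only a small fraction of the ocean's water could be cometary, which is the sense in which comets cannot be the only source.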
Deuterium was also detected in many other hydrogen compounds in the comet. The ratio of deuterium to normal hydrogen was found to vary from compound to compound, which astronomers believe suggests that cometary ices were formed in interstellar clouds, rather than in the solar nebula. Theoretical modelling of ice formation in interstellar clouds suggests that comet Hale–Bopp formed at temperatures of around 25–45 kelvins.
Spectroscopic observations of Hale–Bopp revealed the presence of many organic chemicals, several of which had never been detected in comets before. These complex molecules may exist within the cometary nucleus, or might be synthesised by reactions in the comet.
Hale–Bopp was the first comet where the noble gas argon was detected. Noble gases are chemically inert and vary from low to high volatility. Since different noble elements have different sublimation temperatures, and don't interact with other elements, they can be used for probing the temperature histories of the cometary ices. Krypton has a sublimation temperature of 16–20 K and was found to be depleted more than 25 times relative to the solar abundance, while argon with its higher sublimation temperature was enriched relative to the solar abundance. Together these observations indicate that the interior of Hale–Bopp has always been colder than 35–40 K, but has at some point been warmer than 20 K. Unless the solar nebula was much colder and richer in argon than generally believed, this suggests that the comet formed beyond Neptune in the Kuiper belt region and then migrated outward to the Oort cloud.
Comet Hale–Bopp's activity and outgassing were not spread uniformly over its nucleus, but instead came from several specific jets. Observations of the material streaming away from these jets allowed astronomers to measure the rotation period of the comet, which was found to be about 11 hours 46 minutes.
In 1997 a paper was published that hypothesised the existence of a binary nucleus to fully explain the observed pattern of comet Hale–Bopp's dust emission observed in October 1995. The paper was based on theoretical analysis, and did not claim an observational detection of the proposed satellite nucleus, but estimated that it would have a diameter of about 30 km, with the main nucleus being about 70 km across, and would orbit in about three days at a distance of about 180 km. This analysis was confirmed by observations in 1996 using Wide-Field Planetary Camera 2 of the Hubble Space Telescope which had taken images of the comet that revealed the satellite.
Although observations using adaptive optics in late 1997 and early 1998 showed a double peak in the brightness of the nucleus, controversy still exists over whether such observations can only be explained by a binary nucleus. The discovery of the satellite was not confirmed by other observations. Also, while comets have been observed to break up before, no case had been found of a stable binary nucleus until the subsequent discovery of P/2006 VW139.
In November 1996, amateur astronomer Chuck Shramek of Houston, Texas took a CCD image of the comet which showed a fuzzy, slightly elongated object nearby. His computer sky-viewing program did not identify the star, so Shramek called the Art Bell radio program Coast to Coast AM to announce that he had discovered a "Saturn-like object" following Hale–Bopp. UFO enthusiasts, such as remote viewing proponent and Emory University political science professor Courtney Brown, soon concluded that there was an alien spacecraft following the comet.
Several astronomers, including Alan Hale, stated that the object was simply the 8.5-magnitude star SAO141894. They noted that the star did not appear on Shramek's computer program because the user preferences were set incorrectly. Art Bell claimed to have obtained an image of the object from an anonymous astrophysicist who was about to confirm its discovery. However, astronomers Olivier Hainaut and David Tholen of the University of Hawaii stated that the alleged photo was an altered copy of one of their own comet images.
Thirty-nine members of the Heaven's Gate cult committed mass suicide in March 1997 with the intention of teleporting to a spaceship which they believed was flying behind the comet.
Nancy Lieder, who claims to receive messages from aliens through an implant in her brain, stated that Hale–Bopp was a fiction designed to distract the population from the coming arrival of "Nibiru" or "Planet X", a giant planet whose close passage would disrupt the Earth's rotation, causing global cataclysm. Her original date for the apocalypse was May 2003, which passed without incident, but various conspiracy websites continued to predict the coming of Nibiru, most of which tied it to the 2012 phenomenon. Lieder's and others' claims about the planet Nibiru have been repeatedly debunked by scientists.
Its lengthy period of visibility and extensive coverage in the media meant that Hale–Bopp was probably the most-observed comet in history, making a far greater impact on the general public than the return of Halley's Comet in 1986, and certainly seen by a greater number of people than witnessed any of Halley's previous appearances. For instance, 69% of Americans had seen Hale–Bopp by April 9, 1997.
Hale–Bopp was a record-breaking comet—the farthest comet from the Sun discovered by amateurs, with the largest well-measured cometary nucleus known after 95P/Chiron, and it was visible to the naked eye for twice as long as the previous record-holder. It was also brighter than magnitude 0 for eight weeks, longer than any other recorded comet.
Carolyn Shoemaker and her husband Gene, both famous for co-discovering comet Shoemaker–Levy 9, were involved in a car crash after photographing the comet. Gene died in the crash and his ashes were sent to the Moon aboard NASA's Lunar Prospector mission along with an image of Hale–Bopp, "the last comet that the Shoemakers observed together". | [
{
"paragraph_id": 0,
"text": "Comet Hale–Bopp (formally designated C/1995 O1) is a comet that was one of the most widely observed of the 20th century and one of the brightest seen for many decades.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Alan Hale and Thomas Bopp discovered Comet Hale–Bopp separately on July 23, 1995, before it became visible to the naked eye. It is difficult to predict the maximum brightness of new comets with any degree of certainty, but Hale–Bopp exceeded most predictions when it passed perihelion on April 1, 1997, reaching about magnitude −1.8. It was visible to the naked eye for a record 18 months, due to its massive nucleus size. This is twice as long as the Great Comet of 1811, the previous record holder. Accordingly, Hale–Bopp was dubbed the great comet of 1997.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The comet was discovered independently on July 23, 1995, by two observers, Alan Hale and Thomas Bopp, both in the United States.",
"title": "Discovery"
},
{
"paragraph_id": 3,
"text": "Hale had spent many hundreds of hours searching for comets without success, and was tracking known comets from his driveway in New Mexico when he chanced upon Hale–Bopp just after midnight. The comet had an apparent magnitude of 10.5 and lay near the globular cluster M70 in the constellation of Sagittarius. Hale first established that there was no other deep-sky object near M70, and then consulted a directory of known comets, finding that none were known to be in this area of the sky. Once he had established that the object was moving relative to the background stars, he emailed the Central Bureau for Astronomical Telegrams, the clearing house for astronomical discoveries.",
"title": "Discovery"
},
{
"paragraph_id": 4,
"text": "Bopp did not own a telescope. He was out with friends near Stanfield, Arizona, observing star clusters and galaxies when he chanced across the comet while at the eyepiece of his friend's telescope. He realized he might have spotted something new when, like Hale, he checked his star maps to determine if any other deep-sky objects were known to be near M70, and found that there were none. He alerted the Central Bureau for Astronomical Telegrams through a Western Union telegram. Brian G. Marsden, who had run the bureau since 1968, laughed, \"Nobody sends telegrams anymore. I mean, by the time that telegram got here, Alan Hale had already e-mailed us three times with updated coordinates.\"",
"title": "Discovery"
},
{
"paragraph_id": 5,
"text": "The following morning, it was confirmed that this was a new comet, and it was given the designation C/1995 O1. The discovery was announced in International Astronomical Union circular 6187.",
"title": "Discovery"
},
{
"paragraph_id": 6,
"text": "Hale–Bopp's orbital position was calculated as 7.2 astronomical units (au) from the Sun, placing it between Jupiter and Saturn and by far the greatest distance from Earth at which a comet had been discovered by amateurs. Most comets at this distance are extremely faint, and show no discernible activity, but Hale–Bopp already had an observable coma. A precovery image taken at the Anglo-Australian Telescope in 1993 was found to show the then-unnoticed comet some 13 au from the Sun, a distance at which most comets are essentially unobservable. (Halley's Comet was more than 100 times fainter at the same distance from the Sun.) Analysis indicated later that its comet nucleus was 60±20 kilometres in diameter, approximately six times the size of Halley's Comet.",
"title": "Early observation"
},
{
"paragraph_id": 7,
"text": "Its great distance and surprising activity indicated that comet Hale–Bopp might become very bright when it reached perihelion in 1997. However, comet scientists were wary – comets can be extremely unpredictable, and many have large outbursts at great distance only to diminish in brightness later. Comet Kohoutek in 1973 had been touted as a 'comet of the century' and turned out to be unspectacular.",
"title": "Early observation"
},
{
"paragraph_id": 8,
"text": "Hale–Bopp became visible to the naked eye in May 1996, and although its rate of brightening slowed considerably during the latter half of that year, scientists were still cautiously optimistic that it would become very bright. It was too closely aligned with the Sun to be observable during December 1996, but when it reappeared in January 1997 it was already bright enough to be seen by anyone who looked for it, even from large cities with light-polluted skies.",
"title": "Perihelion"
},
{
"paragraph_id": 9,
"text": "The Internet was a growing phenomenon at the time, and numerous websites that tracked the comet's progress and provided daily images from around the world became extremely popular. The Internet played a large role in encouraging the unprecedented public interest in comet Hale–Bopp.",
"title": "Perihelion"
},
{
"paragraph_id": 10,
"text": "As the comet approached the Sun, it continued to brighten, shining at 2nd magnitude in February, and showing a growing pair of tails, the blue gas tail pointing straight away from the Sun and the yellowish dust tail curving away along its orbit. On March 9, a solar eclipse in China, Mongolia and eastern Siberia allowed observers there to see the comet in the daytime. Hale–Bopp had its closest approach to Earth on March 22, 1997, at a distance of 1.315 au.",
"title": "Perihelion"
},
{
"paragraph_id": 11,
"text": "As it passed perihelion on April 1, 1997, the comet developed into a spectacular sight. It shone brighter than any star in the sky except Sirius, and its dust tail stretched 40–45 degrees across the sky. The comet was visible well before the sky got fully dark each night, and while many great comets are very close to the Sun as they pass perihelion, comet Hale–Bopp was visible all night to Northern Hemisphere observers.",
"title": "Perihelion"
},
{
"paragraph_id": 12,
"text": "After its perihelion passage, the comet moved into the southern celestial hemisphere. The comet was much less impressive to southern hemisphere observers than it had been in the northern hemisphere, but southerners were able to see the comet gradually fade from view during the second half of 1997. The last naked-eye observations were reported in December 1997, which meant that the comet had remained visible without aid for 569 days, or about 18 and a half months. The previous record had been set by the Great Comet of 1811, which was visible to the naked eye for about 9 months.",
"title": "After perihelion"
},
{
"paragraph_id": 13,
"text": "The comet continued to fade as it receded, but was still tracked by astronomers. In October 2007, 10 years after the perihelion and at distance of 25.7 au from Sun, the comet was still active as indicated by the detection of the CO-driven coma. Herschel Space Observatory images taken in 2010 suggest comet Hale–Bopp is covered in a fresh frost layer. Hale–Bopp was again detected in December 2010 when it was 30.7 au away from the Sun, and in 2012, at 33.2 au from the Sun. The James Webb Space Telescope observed Hale–Bopp in 2022, when it was 46.2 au from the Sun.",
"title": "After perihelion"
},
{
"paragraph_id": 14,
"text": "The comet likely made its previous perihelion 4,200 years ago, in July 2215 BC. The estimated closest approach to Earth was 1.4 au, and it may have been observed in ancient Egypt during the 6th dynasty reign of the Pharaoh Pepi II (Reign: 2247 – c. 2216 BC). Pepi's pyramid at Saqqara contains a text referring to an \"nhh-star\" as a companion of the pharaoh in the heavens, where \"nhh\" is the hieroglyph for long hair.",
"title": "Orbital changes"
},
{
"paragraph_id": 15,
"text": "Hale–Bopp may have had a near collision with Jupiter in early June 2215 BC, which probably caused a dramatic change in its orbit, and 2215 BC may have been its first passage through the inner Solar System from the Oort cloud. The comet's current orbit is almost perpendicular to the plane of the ecliptic, so further close approaches to planets will be rare. However, in April 1996 the comet passed within 0.77 au of Jupiter, close enough for its orbit to be measurably affected by the planet's gravity. The comet's orbit was shortened considerably to a period of roughly 2,399 years, and it will next return to the inner Solar System around the year 4385. Its greatest distance from the Sun (aphelion) will be about 354 au, reduced from about 525 au.",
"title": "Orbital changes"
},
{
"paragraph_id": 16,
"text": "The estimated probability of Hale-Bopp's striking Earth in future passages through the inner Solar System is remote, about 2.5×10 per orbit. However, given that the comet nucleus is around 60 km in diameter, the consequences of such an impact would be apocalyptic. Weissman conservatively estimates the diameter at 35 km; an estimated density of 0.6 g/cm then gives a cometary mass of 1.3×10 g. At a probable impact velocity of 52.5 km/s, impact energy can be calculated as 1.9×10 ergs, or 4.4×10 megatons, about 44 times the estimated energy of the K-T impact event.",
"title": "Orbital changes"
},
{
"paragraph_id": 17,
"text": "Over many orbits, the cumulative effect of gravitational perturbations on comets with high orbital inclinations and small perihelion distances is generally to reduce the perihelion distance to very small values. Hale–Bopp has about a 15% chance of eventually becoming a sungrazing comet through this process. If such is the case, it could undergo huge mass loss, or break up into smaller pieces like the Kreutz sungrazers. It would also be extremely bright, due to a combination of closeness to the Sun and nuclei size, potentially exceeding Halley’s Comet in 837 AD.",
"title": "Orbital changes"
},
{
"paragraph_id": 18,
"text": "Due to the massive size of its nucleus, Comet Hale–Bopp was observed intensively by astronomers during its perihelion passage, and several important advances in cometary science resulted from these observations. The dust production rate of the comet was very high (up to 2.0×10 kg/s), which may have made the inner coma optically thick. Based on the properties of the dust grains—high temperature, high albedo and strong 10 μm silicate emission feature—the astronomers concluded the dust grains are smaller than observed in any other comet.",
"title": "Scientific results"
},
{
"paragraph_id": 19,
"text": "Hale–Bopp showed the highest ever linear polarization detected for any comet. Such polarization is the result of solar radiation getting scattered by the dust particles in the coma of the comet and depends on the nature of the grains. It further confirms that the dust grains in the coma of comet Hale–Bopp were smaller than inferred in any other comet.",
"title": "Scientific results"
},
{
"paragraph_id": 20,
"text": "One of the most remarkable discoveries was that the comet had a third type of tail. In addition to the well-known gas and dust tails, Hale–Bopp also exhibited a faint sodium tail, only visible with powerful instruments with dedicated filters. Sodium emission had been previously observed in other comets, but had not been shown to come from a tail. Hale–Bopp's sodium tail consisted of neutral atoms (not ions), and extended to some 50 million kilometres in length.",
"title": "Scientific results"
},
{
"paragraph_id": 21,
"text": "The source of the sodium appeared to be the inner coma, although not necessarily the nucleus. There are several possible mechanisms for generating a source of sodium atoms, including collisions between dust grains surrounding the nucleus, and \"sputtering\" of sodium from dust grains by ultraviolet light. It is not yet established which mechanism is primarily responsible for creating Hale–Bopp's sodium tail, and the narrow and diffuse components of the tail may have different origins.",
"title": "Scientific results"
},
{
"paragraph_id": 22,
"text": "While the comet's dust tail roughly followed the path of the comet's orbit and the gas tail pointed almost directly away from the Sun, the sodium tail appeared to lie between the two. This implies that the sodium atoms are driven away from the comet's head by radiation pressure.",
"title": "Scientific results"
},
{
"paragraph_id": 23,
"text": "The abundance of deuterium in comet Hale–Bopp in the form of heavy water was found to be about twice that of Earth's oceans. If Hale–Bopp's deuterium abundance is typical of all comets, this implies that although cometary impacts are thought to be the source of a significant amount of the water on Earth, they cannot be the only source.",
"title": "Scientific results"
},
{
"paragraph_id": 24,
"text": "Deuterium was also detected in many other hydrogen compounds in the comet. The ratio of deuterium to normal hydrogen was found to vary from compound to compound, which astronomers believe suggests that cometary ices were formed in interstellar clouds, rather than in the solar nebula. Theoretical modelling of ice formation in interstellar clouds suggests that comet Hale–Bopp formed at temperatures of around 25–45 kelvins.",
"title": "Scientific results"
},
{
"paragraph_id": 25,
"text": "Spectroscopic observations of Hale–Bopp revealed the presence of many organic chemicals, several of which had never been detected in comets before. These complex molecules may exist within the cometary nucleus, or might be synthesised by reactions in the comet.",
"title": "Scientific results"
},
{
"paragraph_id": 26,
"text": "Hale–Bopp was the first comet where the noble gas argon was detected. Noble gases are chemically inert and vary from low to high volatility. Since different noble elements have different sublimation temperatures, and don't interact with other elements, they can be used for probing the temperature histories of the cometary ices. Krypton has a sublimation temperature of 16–20 K and was found to be depleted more than 25 times relative to the solar abundance, while argon with its higher sublimation temperature was enriched relative to the solar abundance. Together these observations indicate that the interior of Hale–Bopp has always been colder than 35–40 K, but has at some point been warmer than 20 K. Unless the solar nebula was much colder and richer in argon than generally believed, this suggests that the comet formed beyond Neptune in the Kuiper belt region and then migrated outward to the Oort cloud.",
"title": "Scientific results"
},
{
"paragraph_id": 27,
"text": "Comet Hale–Bopp's activity and outgassing were not spread uniformly over its nucleus, but instead came from several specific jets. Observations of the material streaming away from these jets allowed astronomers to measure the rotation period of the comet, which was found to be about 11 hours 46 minutes.",
"title": "Scientific results"
},
{
"paragraph_id": 28,
"text": "In 1997 a paper was published that hypothesised the existence of a binary nucleus to fully explain the observed pattern of comet Hale–Bopp's dust emission observed in October 1995. The paper was based on theoretical analysis, and did not claim an observational detection of the proposed satellite nucleus, but estimated that it would have a diameter of about 30 km, with the main nucleus being about 70 km across, and would orbit in about three days at a distance of about 180 km. This analysis was confirmed by observations in 1996 using Wide-Field Planetary Camera 2 of the Hubble Space Telescope which had taken images of the comet that revealed the satellite.",
"title": "Scientific results"
},
{
"paragraph_id": 29,
"text": "Although observations using adaptive optics in late 1997 and early 1998 showed a double peak in the brightness of the nucleus, controversy still exists over whether such observations can only be explained by a binary nucleus. The discovery of the satellite was not confirmed by other observations. Also, while comets have been observed to break up before, no case had been found of a stable binary nucleus until the subsequent discovery of P/2006 VW139.",
"title": "Scientific results"
},
{
"paragraph_id": 30,
"text": "In November 1996, amateur astronomer Chuck Shramek of Houston, Texas took a CCD image of the comet which showed a fuzzy, slightly elongated object nearby. His computer sky-viewing program did not identify the star, so Shramek called the Art Bell radio program Coast to Coast AM to announce that he had discovered a \"Saturn-like object\" following Hale–Bopp. UFO enthusiasts, such as remote viewing proponent and Emory University political science professor Courtney Brown, soon concluded that there was an alien spacecraft following the comet.",
"title": "UFO claims"
},
{
"paragraph_id": 31,
"text": "Several astronomers, including Alan Hale, stated that the object was simply the 8.5-magnitude star SAO141894. They noted that the star did not appear on Shramek's computer program because the user preferences were set incorrectly. Art Bell claimed to have obtained an image of the object from an anonymous astrophysicist who was about to confirm its discovery. However, astronomers Olivier Hainaut and David Tholen of the University of Hawaii stated that the alleged photo was an altered copy of one of their own comet images.",
"title": "UFO claims"
},
{
"paragraph_id": 32,
"text": "Thirty-nine members of the Heaven's Gate cult committed mass suicide in March 1997 with the intention of teleporting to a spaceship which they believed was flying behind the comet.",
"title": "UFO claims"
},
{
"paragraph_id": 33,
"text": "Nancy Lieder, who claims to receive messages from aliens through an implant in her brain, stated that Hale–Bopp was a fiction designed to distract the population from the coming arrival of \"Nibiru\" or \"Planet X\", a giant planet whose close passage would disrupt the Earth's rotation, causing global cataclysm. Her original date for the apocalypse was May 2003, which passed without incident, but various conspiracy websites continued to predict the coming of Nibiru, most of whom tied it to the 2012 phenomenon. Lieder and others' claims of the planet Nibiru have been repeatedly debunked by scientists.",
"title": "UFO claims"
},
{
"paragraph_id": 34,
"text": "Its lengthy period of visibility and extensive coverage in the media meant that Hale–Bopp was probably the most-observed comet in history, making a far greater impact on the general public than the return of Halley's Comet in 1986, and certainly seen by a greater number of people than witnessed any of Halley's previous appearances. For instance, 69% of Americans had seen Hale–Bopp by April 9, 1997.",
"title": "Legacy"
},
{
"paragraph_id": 35,
"text": "Hale–Bopp was a record-breaking comet—the farthest comet from the Sun discovered by amateurs, with the largest well-measured cometary nucleus known after 95P/Chiron, and it was visible to the naked eye for twice as long as the previous record-holder. It was also brighter than magnitude 0 for eight weeks, longer than any other recorded comet.",
"title": "Legacy"
},
{
"paragraph_id": 36,
"text": "Carolyn Shoemaker and her husband Gene, both famous for co-discovering comet Shoemaker–Levy 9, were involved in a car crash after photographing the comet. Gene died in the crash and his ashes were sent to the Moon aboard NASA's Lunar Prospector mission along with an image of Hale–Bopp, \"the last comet that the Shoemakers observed together\".",
"title": "Legacy"
}
] | Comet Hale–Bopp is a comet that was one of the most widely observed of the 20th century and one of the brightest seen for many decades. Alan Hale and Thomas Bopp discovered Comet Hale–Bopp separately on July 23, 1995, before it became visible to the naked eye. It is difficult to predict the maximum brightness of new comets with any degree of certainty, but Hale–Bopp exceeded most predictions when it passed perihelion on April 1, 1997, reaching about magnitude −1.8. It was visible to the naked eye for a record 18 months, due to its massive nucleus size. This is twice as long as the Great Comet of 1811, the previous record holder. Accordingly, Hale–Bopp was dubbed the great comet of 1997. | 2001-11-24T19:15:41Z | 2023-12-28T03:32:52Z | [
"Template:Featured article",
"Template:Infobox Comet",
"Template:Multiple image",
"Template:Mp",
"Template:Main",
"Template:Clear",
"Template:Cite journal",
"Template:Cite magazine",
"Template:Authority control",
"Template:Redirect",
"Template:Use British English",
"Template:Lang",
"Template:ISBN",
"Template:Commons category",
"Template:JPL Small Body",
"Template:Use mdy dates",
"Template:E",
"Template:Cite web",
"Template:Cite news",
"Template:Comets",
"Template:Short description",
"Template:Reflist",
"Template:Citation",
"Template:Cite book",
"Template:Portal bar"
] | https://en.wikipedia.org/wiki/Comet_Hale%E2%80%93Bopp |
7,230 | Conspiracy | A conspiracy, also known as a plot, is a secret plan or agreement between people (called conspirers or conspirators) for an unlawful or harmful purpose, such as murder, treason, or corruption, especially with political motivation, while keeping their agreement secret from the public or from other people affected by it. In a political sense, conspiracy refers to a group of people united in the goal of usurping, altering or overthrowing an established political power. Depending on the circumstances, a conspiracy may also be a crime, or a civil wrong. The term generally connotes, or implies, wrongdoing or illegality on the part of the conspirators, as it is commonly believed that people would not need to conspire to engage in activities that were lawful and ethical, or to which no one would object.
There are some coordinated activities that people engage in with secrecy that are not generally thought of as conspiracies. For example, intelligence agencies such as the American CIA and the British MI6 necessarily make plans in secret to spy on suspected enemies of their respective countries and on the general populace of their home countries, but this kind of activity is generally not considered to be a conspiracy so long as their goal is to fulfill their official functions, and not something like improperly enriching themselves. Similarly, the coaches of competing sports teams routinely meet behind closed doors to plan game strategies and specific plays designed to defeat their opponents, but this activity is not considered a conspiracy because this is considered a legitimate part of the sport. Furthermore, a conspiracy must be engaged in knowingly. The continuation of social traditions that work to the advantage of certain groups and to the disadvantage of certain other groups, though possibly unethical, is not a conspiracy if participants in the practice are not carrying it forward for the purpose of perpetuating this advantage.
On the other hand, if the intent of carrying out a conspiracy exists, then there is a conspiracy even if the details are never agreed to aloud by the participants. CIA covert operations, for instance, are by their very nature hard to prove definitively, but research into the agency's work, as well as revelations by former CIA employees, has suggested several cases where the agency tried to influence events. During the Cold War, the United States tried to covertly change other nations' governments 66 times, succeeding in 26 cases.
A "conspiracy theory" is a belief that a conspiracy has actually been decisive in producing a political event of which the theorists strongly disapprove. Political scientist Michael Barkun has described conspiracy theories as relying on the view that the universe is governed by design, and embody three principles: nothing happens by accident, nothing is as it seems, and everything is connected. Another common feature is that conspiracy theories evolve to incorporate whatever evidence exists against them, so that they become, as Barkun writes, a closed system that is unfalsifiable, and therefore "a matter of faith rather than proof."
Conspiracy comes from the Latin word conspiratio. While conspiratio can mean "plot" or "conspiracy", it can also be translated as "unity" and "agreement", in the context of a group. Conspiratio comes from conspiro which, while still meaning "conspiracy" in the modern sense, also means "I sing in unison", as con- means "with" or "together", and spiro means "I breathe", literally meaning "I breathe together with others". | [
{
"paragraph_id": 0,
"text": "A conspiracy, also known as a plot, is a secret plan or agreement between people (called conspirers or conspirators) for an unlawful or harmful purpose, such as murder, treason, or corruption, especially with political motivation, while keeping their agreement secret from the public or from other people affected by it. In a political sense, conspiracy refers to a group of people united in the goal of usurping, altering or overthrowing an established political power. Depending on the circumstances, a conspiracy may also be a crime, or a civil wrong. The term generally connotes, or implies, wrongdoing or illegality on the part of the conspirators, as it is commonly believed that people would not need to conspire to engage in activities that were lawful and ethical, or to which no one would object.",
"title": ""
},
{
"paragraph_id": 1,
"text": "There are some coordinated activities that people engage in with secrecy that are not generally thought of as conspiracies. For example, intelligence agencies such as the American CIA and the British MI6 necessarily make plans in secret to spy on suspected enemies of their respective countries and the general populace of its home countries, but this kind of activity is generally not considered to be a conspiracy so long as their goal is to fulfill their official functions, and not something like improperly enriching themselves. Similarly, the coaches of competing sports teams routinely meet behind closed doors to plan game strategies and specific plays designed to defeat their opponents, but this activity is not considered a conspiracy because this is considered a legitimate part of the sport. Furthermore, a conspiracy must be engaged in knowingly. The continuation of social traditions that work to the advantage of certain groups and to the disadvantage of certain other groups, though possibly unethical, is not a conspiracy if participants in the practice are not carrying it forward for the purpose of perpetuating this advantage.",
"title": ""
},
{
"paragraph_id": 2,
"text": "On the other hand, if the intent of carrying out a conspiracy exists, then there is a conspiracy even if the details are never agreed to aloud by the participants. CIA covert operations, for instance, are by their very nature hard to prove definitively, but research into the agency's work, as well as revelations by former CIA employees, has suggested several cases where the agency tried to influence events. During the Cold War, the United States tried to covertly change other nations' governments 66 times, succeeding in 26 cases.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A \"conspiracy theory\" is a belief that a conspiracy has actually been decisive in producing a political event of which the theorists strongly disapprove. Political scientist Michael Barkun has described conspiracy theories as relying on the view that the universe is governed by design, and embody three principles: nothing happens by accident, nothing is as it seems, and everything is connected. Another common feature is that conspiracy theories evolve to incorporate whatever evidence exists against them, so that they become, as Barkun writes, a closed system that is unfalsifiable, and therefore \"a matter of faith rather than proof.\"",
"title": ""
},
{
"paragraph_id": 4,
"text": "Conspiracy comes from the Latin word conspiratio. While conspiratio can mean \"plot\" or \"conspiracy\", it can also be translated as \"unity\" and \"agreement\", in the context of a group. Conspiratio comes from conspiro which, while still meaning \"conspiracy\" in the modern sense, also means \"I sing in unison\", as con- means \"with\" or \"together\", and spiro means \"I breathe\", literally meaning \"I breathe together with others\".",
"title": "Etymology"
}
] | A conspiracy, also known as a plot, is a secret plan or agreement between people for an unlawful or harmful purpose, such as murder, treason, or corruption, especially with political motivation, while keeping their agreement secret from the public or from other people affected by it. In a political sense, conspiracy refers to a group of people united in the goal of usurping, altering or overthrowing an established political power. Depending on the circumstances, a conspiracy may also be a crime, or a civil wrong. The term generally connotes, or implies, wrongdoing or illegality on the part of the conspirators, as it is commonly believed that people would not need to conspire to engage in activities that were lawful and ethical, or to which no one would object. There are some coordinated activities that people engage in with secrecy that are not generally thought of as conspiracies. For example, intelligence agencies such as the American CIA and the British MI6 necessarily make plans in secret to spy on suspected enemies of their respective countries and the general populace of its home countries, but this kind of activity is generally not considered to be a conspiracy so long as their goal is to fulfill their official functions, and not something like improperly enriching themselves. Similarly, the coaches of competing sports teams routinely meet behind closed doors to plan game strategies and specific plays designed to defeat their opponents, but this activity is not considered a conspiracy because this is considered a legitimate part of the sport. Furthermore, a conspiracy must be engaged in knowingly. The continuation of social traditions that work to the advantage of certain groups and to the disadvantage of certain other groups, though possibly unethical, is not a conspiracy if participants in the practice are not carrying it forward for the purpose of perpetuating this advantage. On the other hand, if the intent of carrying out a conspiracy exists, then there is a conspiracy even if the details are never agreed to aloud by the participants. CIA covert operations, for instance, are by their very nature hard to prove definitively, but research into the agency's work, as well as revelations by former CIA employees, has suggested several cases where the agency tried to influence events. During the Cold War, the United States tried to covertly change other nations' governments 66 times, succeeding in 26 cases. A "conspiracy theory" is a belief that a conspiracy has actually been decisive in producing a political event of which the theorists strongly disapprove. Political scientist Michael Barkun has described conspiracy theories as relying on the view that the universe is governed by design, and embody three principles: nothing happens by accident, nothing is as it seems, and everything is connected. Another common feature is that conspiracy theories evolve to incorporate whatever evidence exists against them, so that they become, as Barkun writes, a closed system that is unfalsifiable, and therefore "a matter of faith rather than proof." | 2001-11-25T02:39:24Z | 2023-12-12T02:23:17Z | [
"Template:Reflist",
"Template:Cite book",
"Template:Commons category",
"Template:Authority control",
"Template:Short description",
"Template:Hatgrp",
"Template:Sfn",
"Template:Cite web",
"Template:Wikiquote-inline"
] | https://en.wikipedia.org/wiki/Conspiracy |
7,232 | Cholistan Desert | The Cholistan Desert (Urdu: صحرائے چولستان; Punjabi: چولستان روہی), also locally known as Rohi (روہی), is a desert in the southern part of Punjab, Pakistan that forms part of the Greater Thar Desert, which extends to Sindh province and the Indian state of Rajasthan. It is one of two large deserts in Punjab, the other being the Thal Desert. The name is derived from the Turkic word chol, meaning "sands," and istan, a Persian suffix meaning "land of."
Cholistan was a center for caravan trade, leading to the construction of numerous forts in the medieval period to protect trade routes - of which the Derawar Fort is the best-preserved example.
Cholistan covers an area of 25,800 km² (10,000 sq mi) in the Bahawalpur, Bahawalnagar, and Rahim Yar Khan districts of southern Punjab. The nearest major city is Bahawalpur city, 30 km (19 mi) from the edge of the desert. The desert stretches about 480 kilometres in length, with a width varying between 32 and 192 kilometres. It is located between 27°42′00″ and 29°45′00″ north, and between 69°57′30″ and 72°52′30″ east. 81% of the desert is sandy, while 19% is characterized by alluvial flats and small sandy dunes. The entire region is subject to desertification due to poor vegetation cover resulting in wind erosion.
Cholistan's climate is characterized as an arid and semi-arid Tropical desert, with very low annual humidity. The mean temperature in Cholistan is 28.33 °C (82.99 °F), with the hottest month being July with a mean temperature of 38.5 °C (101.3 °F). Summer temperatures can surpass 46 °C (115 °F), and sometimes rise over 50 °C (122 °F) during periods of drought. Winter temperatures occasionally dip to 0 °C (32 °F). Average rainfall in Cholistan is up to 180mm, with July and August being the wettest months, although droughts are common. Water is collected seasonally in a system of natural pools called Toba, or manmade pools called Kund. Subsoil water is found at a depth of 30–40 meters, but is typically brackish, and unsuitable for most plant growth.
In May 2022, many cattle died in the desert areas of Cholistan due to extreme heat and water shortage. Shepherds, along with their herds, began migrating from water-scarce areas. Toba Salem Sar and Toba Nawa Kahu were the worst-affected areas, where 50 sheep died due to lack of water, while more data was still being collected from the affected areas.
Cholistan was formed during the Pleistocene period. Geologically, Cholistan is divided into the Greater Cholistan and Lesser Cholistan, which are roughly divided by the dry bed of the ancient Hakra River. Greater Cholistan is a mostly sandy area in the south and west part of the desert up to the border with India, and covers an area of 13,600 km² (5,300 sq mi). Sand dunes in this area reach over 100 meters in height. Soil in the region is also highly saline. Lesser Cholistan is an arid and slightly less sandy region approximately 12,370 km² (4,780 sq mi) in area which extends north and east from the old Hakra river bed, historically up to the banks of the Sutlej River.
Soil quality is generally poor with little organic matter in the Greater Cholistan, and compacted alluvial clays in the Lesser Cholistan. A canal system built during the British era led to irrigation of the northern part of Lesser Cholistan.
Though now an arid region, Cholistan once had a large river flowing through it that was formed by the waters of the Sutlej and Yamuna Rivers. The dry bed of the Hakra River runs through the area, along which many settlements of the Indus Valley civilization/Harappan culture have been discovered, including the large urban site of Ganweriwal. The river system supported settlements in the region between 4000 BCE and 600 BCE when the river changed course. The river carried significant amounts of water, and flowed until at least where Derawar Fort is now located.
Over 400 Harappan sites had been listed in Cholistan in the 1970s, with a further 37 added in the 1990s. The high density of settlements in Cholistan suggest it may have been one of the most productive regions of the Indus Valley Civilization. In the post-Harappan period, Cholistan was part of the Cemetery H culture which grew as a surviving regional variant of the Harappan culture, which was then followed by the Painted Grey Ware culture.
The region became a center for caravan trade, leading to the construction of a dense network of forts in the medieval period - of which the Derawar Fort is the best-preserved example. Other large forts in Cholistan include Meergarh, Jaangarh, Marotgarh, Maujgarh, Dingarh, Khangarh, Khairgarh, Bijnotgarh and Islamgarh - with the suffix "garh" denoting "fort." These forts are part of the Tentative List of UNESCO World Heritage Sites, and run roughly parallel to the Indus and Sutlej Rivers 40 miles to the south. Smaller forts in the area include Bara, Bhagla, Duheinwala, Falji, Kandera, Liara, Murid, Machki, Nawankot, and Phulra forts.
The backbone of Cholistan economy is animal rearing. Few other livelihood opportunities aside from livestock farming are available in the region. Agricultural farming away from the irrigated regions in Lower Cholistan is difficult due to the lack of steady water supply.
Camels in particular are prized in Cholistan for their meat and milk, use as transportation, and for entertainment such as racing and camel dancing. Two types of camels are found in Cholistan: Marrecha, or Mahra, is used for transportation or racing/dancing. Berella is used for milk production, and can produce 10–15 liters of milk per day per animal.
Livestock is important for meeting the area's major needs for cottage industry as well as for providing milk, meat, and fat. Because of the nomadic way of life, the main wealth of the people is their cattle, which are bred for sale, milked, or shorn for their wool. Moreover, isolated as they were, the people had to depend upon themselves for all their needs, such as food, clothing, and items of daily use. All their crafts therefore initially stemmed from necessity, but later on they began exporting their goods to other places as well. The estimated number of livestock in the desert areas is 1.6 million.
Cholistan produces carpet wool of a quality superior to that produced in other parts of Pakistan. From this wool the locals make beautiful carpets, rugs, and other woolen items, including blankets, which are a local necessity because the desert is not only dust and heat: winter nights here are very cold, often below freezing. Khes and pattu are also manufactured from wool or cotton; khes is a form of blanket with a field of black and white, while pattu has a white ground base. Cholistan now also sells the raw wool, as it brings the maximum profit.
It may be mentioned that cotton textiles have always been a hallmark craft of the Indus Valley civilization. Various kinds of khaddar cloth are made for local consumption, and fine khaddar bedclothes and coarse lungies are woven here. A beautiful cloth called Sufi is also woven of silk and cotton, or with a cotton warp and silk weft. Gargas are made in numerous patterns and colors, with complicated embroidery, mirror work, and patchwork. Ajrak is another specialty of Cholistan: a delicate printing technique applied to both sides of the cloth in indigo blue and red patterns covering the base cloth. Cotton turbans and shawls are also made here. Chunri is another form of dopatta, with innumerable colors and patterns such as dots, squares, and circles.
As per the 1998 Census of Pakistan, Cholistan had a total population of 128,019 people, with a 2015 estimate of 229,071 and about 70% living in Lesser Cholistan. The average household size is 6.65.
As mentioned above, the Indus Valley has always been occupied by the wandering nomadic tribes who are fond of isolated areas, as such areas allow them to lead life free of foreign intrusion, enabling them to establish their own individual and unique cultures. Cholistan till the era of Mughal rule had also been isolated from outside influence. During the rule of Mughal Emperor Akbar, it became a proper productive unit. The entire area was ruled by a host of kings who securely guarded their frontiers. The rulers were the great patrons of art, and the various crafts underwent a simultaneous and parallel development, influencing each other. Masons, stone carvers, artisans, artists, and designers started rebuilding the old cities and new sites, and with that flourished new courts, paintings, weaving, and pottery. The fields of architecture, sculpture, terra cotta, and pottery developed greatly in this phase.
Camels are highly valued by the desert dwellers. They are useful not only for transportation and loading purposes; their skin and wool are also quite valuable. Camel wool is spun and woven into beautiful woolen blankets known as falsies and into stylish and durable rugs. Camel leather is also utilized in making caps, goblets, and expensive lampshades.
Leather work is another important local cottage industry due to the large number of livestock here. Other than the products mentioned above, Khusa (shoes) is a specialty of this area. Cholistani khusas are very famous for the quality of workmanship, variety, and richness of designs especially when stitched and embroidered with golden or brightly-colored threads.
The people of Cholistan are fond of jewelry, especially gold jewelry. The chief ornaments made and worn by them are Nath (nosegay), Katmala (necklace) Kangan (bracelet), and Pazeb (anklets). Gold and silver bangles are also a product of Cholistan. The locals similarly work in enamel, producing enamel buttons, earrings, bangles, and rings.
Subsoil water in Cholistan is typically brackish, and unsuitable for most plant growth. Native trees, shrubs, and grasses are drought tolerant. There are 131 plant species in Cholistan from 89 genera and 24 families. The most common of them are listed below:
A man-made forest called Dingarh was developed by the Pakistan Council of Research in Water Resources (PCRWR) on more than 100 ha. Dunes were fixed and stabilized by mechanical and vegetative means, and the area is now covered with trees with orchards of zizyphus, date palms, and grassland grown with collected rainwater and saline groundwater.
The wildlife of the Cholistan desert mostly consists of migratory birds, especially the Houbara bustard, which migrates to the area during winter. The species is most sought after in the hunting season, even though it is endangered in Pakistan (vulnerable globally) according to the IUCN Red List. Its population has decreased from 4,746 in 2001 to just a few dozen in recent times. In December 2016, a Qatari prince had his hunting license rejected because the species is endangered. Another prince, Dr. Fahad, was fined Rs. 80,000 ($760) for hunting without a permit and license, and all of the birds he had caught were set free. Other endangered species in this desert include the Chinkara antelope, the Great Indian bustard, and the blue bull. The population of Chinkara decreased from 3,000 in 2007 to a little above 1,000 in 2010 due to unlicensed hunting of the species by influential political families.
The Indus civilization was one of the earliest centres of pottery, and thus the pottery of Cholistan has a long history. Local soil is very fine and suitable for making pottery. The fineness of the earth can be observed on the Kacha houses which are actually plastered with mud but look like they have been white washed. The chief Cholistani ceramic articles are their surahies, piyalas, and glasses, remarkable for their lightness and fine finishing.
In earlier times, only the art of pottery and terracotta developed, but from the seventh century onwards, a large number of temples and images were also built on account of the intensified religious passions and the accumulation of wealth in cities.
28°30′N 70°00′E / 28.500°N 70.000°E / 28.500; 70.000 | [
{
"paragraph_id": 0,
"text": "The Cholistan Desert (Urdu: صحرائے چولستان; Punjabi: چولستان روہی), also locally known as Rohi (روہی), is a desert in the southern part of Punjab, Pakistan that forms part of the Greater Thar Desert, which extends to Sindh province and the Indian state of Rajasthan. It is one of two large deserts in Punjab, the other being the Thal Desert. The name is derived from the Turkic word chol, meaning \"sands,\" and istan, a Persian suffix meaning \"land of.\"",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cholistan was a center for caravan trade, leading to the construction of numerous forts in the medieval period to protect trade routes - of which the Derawar Fort is the best-preserved example.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cholistan covers an area of 25,800 km (10,000 sq mi) in the Bahawalpur, Bahawalnagar, and Rahim Yar Khan districts of southern Punjab. The nearest major city is Bahawalpur city, 30 km (19 mi) from the edge of the desert. The desert stretches about 480 kilometres in length, with a width varying between 32 and 192 kilometres. It is located between 27°42΄00΄΄ to 29° 45΄00΄΄ north, and 69°57' 30'′ to 72° 52' 30'′ east. 81% of the desert is sandy, while 19% is characterized by alluvial flats and small sandy dunes. The entire region is subject to desertification due to poor vegetation cover resulting in wind erosion.",
"title": "Geography"
},
{
"paragraph_id": 3,
"text": "Cholistan's climate is characterized as an arid and semi-arid Tropical desert, with very low annual humidity. The mean temperature in Cholistan is 28.33 °C (82.99 °F), with the hottest month being July with a mean temperature of 38.5 °C (101.3 °F). Summer temperatures can surpass 46 °C (115 °F), and sometimes rise over 50 °C (122 °F) during periods of drought. Winter temperatures occasionally dip to 0 °C (32 °F). Average rainfall in Cholistan is up to 180mm, with July and August being the wettest months, although droughts are common. Water is collected seasonally in a system of natural pools called Toba, or manmade pools called Kund. Subsoil water is found at a depth of 30–40 meters, but is typically brackish, and unsuitable for most plant growth.",
"title": "Geography"
},
{
"paragraph_id": 4,
"text": "In May 2022, in the desert areas of Cholistan in Pakistan many cattle died due to extreme heat and water shortage. Shepherds, including cattle, have started migrating from water-scarce areas. Toba Salem Sar and Toba Nawa Kahu were the worst affected areas where 50 sheep died due to lack of water while more data is being collected from the affected areas.",
"title": "Geography"
},
{
"paragraph_id": 5,
"text": "Cholistan was formed during the Pleistocene period. Geologically, Cholistan is divided into the Greater Cholistan and Lesser Cholistan, which are roughly divided by the dry bed of the ancient Hakra River. Greater Cholistan is a mostly sandy area in the south and west part of the desert up to the border with India, and covers an area of 13,600 km (5,300 sq mi). Sand dunes in this area reach over 100 meters in height. Soil in the region is also highly saline. Lesser Cholistan is an arid and slightly less sandy region approximately 12,370 km (4,780 sq mi) in area which extends north and east from the old Hakra river bed, historically up to the banks of the Sutlej River.",
"title": "Geology"
},
{
"paragraph_id": 6,
"text": "Soil quality is generally poor with little organic matter in the Greater Cholistan, and compacted alluvial clays in the Lesser Cholistan. A canal system built during the British era led to irrigation of the northern part of Lesser Cholistan.",
"title": "Geology"
},
{
"paragraph_id": 7,
"text": "Though now an arid region, Cholistan once had a large river flowing through it that was formed by the waters of the Sutlej and Yamuna Rivers. The dry bed of the Hakra River runs through the area, along which many settlements of the Indus Valley civilization/Harappan culture have been discovered, including the large urban site of Ganweriwal. The river system supported settlements in the region between 4000 BCE and 600 BCE when the river changed course. The river carried significant amounts of water, and flowed until at least where Derawar Fort is now located.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Over 400 Harappan sites had been listed in Cholistan in the 1970s, with a further 37 added in the 1990s. The high density of settlements in Cholistan suggest it may have been one of the most productive regions of the Indus Valley Civilization. In the post-Harappan period, Cholistan was part of the Cemetery H culture which grew as a surviving regional variant of the Harappan culture, which was then followed by the Painted Grey Ware culture.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The region became a center for caravan trade, leading to the construction of a dense network of forts in the medieval period - of which the Derawar Fort is the best-preserved example. Other large forts in Cholistan include Meergarh, Jaangarh, Marotgarh, Maujgarh, Dingarh, Khangarh, Khairgarh, Bijnotgarh and Islamgarh - with the suffix \"garh\" denoting \"fort.\" These forts are part of the Tentative List of UNESCO World Heritage Sites, and run roughly parallel to the Indus and Sutlej Rivers 40 miles to the south. Smaller forts in the area include Bara, Bhagla, Duheinwala, Falji, Kandera, Liara, Murid, Machki, Nawankot, and Phulra forts.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The backbone of Cholistan economy is animal rearing. Few other livelihood opportunities aside from livestock farming are available in the region. Agricultural farming away from the irrigated regions in Lower Cholistan is difficult due to the lack of steady water supply.",
"title": "Economy"
},
{
"paragraph_id": 11,
"text": "Camels in particular are prized in Cholistan for their meat and milk, use as transportation, and for entertainment such as racing and camel dancing. Two types of camels are found in Cholistan: Marrecha, or Mahra, is used for transportation or racing/dancing. Berella is used for milk production, and can produce 10–15 liters of milk per day per animal.",
"title": "Economy"
},
{
"paragraph_id": 12,
"text": "Livestock holds much importance for meeting the area's major needs for cottage industry as well as providing milk, meat and fat. Because of the nomadic way of life, the main wealth of the people are their cattle that are bred for sale, milked or shorn for their wool. Moreover, isolated as they were, they had to depend upon themselves for all their needs like food, clothing, and items of daily use. So all their crafts initially stemmed from necessity but later on they started exporting their goods to the other places as well. The estimated number of livestock in the desert areas is 1.6 million.",
"title": "Economy"
},
{
"paragraph_id": 13,
"text": "Cholistan produces a very superior type of carpet wool compared to that produced in other parts of Pakistan. From this wool they knit beautiful carpets, rugs, and other woolen items. This includes blankets, which is also a local necessity for the desert as it is not always dust and heat, but winter nights here are very cold too, usually below the freezing point. Khes and pattu are also manufactured with wool or cotton. Khes is a form of blanket with a field of black white and pattu has a white ground base. Cholistan is now selling the wool for it brings maximum profit.",
"title": "Economy"
},
{
"paragraph_id": 14,
"text": "It may be mentioned that cotton textiles have always been a hallmark craft of the Indus Valley civilization. Various kinds of khaddar-cloth are made for local consumption, and fine khaddar bedclothes and coarse lungies are woven here. A beautiful cloth called Sufi is also woven of silk and cotton, or with cotton wrap and silk wool. Gargas are made with numerous patterns and color, having complicated embroidery, mirror, and patchwork. Ajrak is another specialty of Cholistan. It is a special and delicate printing technique on both sides of the cloth in indigo blue and red patterns covering the base cloth. Cotton turbans and shawls are also made here. Chunri is another form of dopattas, having innumerable colors and patterns like dots, squares, and circles on it.",
"title": "Economy"
},
{
"paragraph_id": 15,
"text": "As per the 1998 Census of Pakistan, a total of 128,019 people, with a 2015 estimate of 229,071, with 70% living in Lesser Cholistan. The average household size is 6.65.",
"title": "People"
},
{
"paragraph_id": 16,
"text": "As mentioned above, the Indus Valley has always been occupied by the wandering nomadic tribes who are fond of isolated areas, as such areas allow them to lead life free of foreign intrusion, enabling them to establish their own individual and unique cultures. Cholistan till the era of Mughal rule had also been isolated from outside influence. During the rule of Mughal Emperor Akbar, it became a proper productive unit. The entire area was ruled by a host of kings who securely guarded their frontiers. The rulers were the great patrons of art, and the various crafts underwent a simultaneous and parallel development, influencing each other. Masons, stone carvers, artisans, artists, and designers started rebuilding the old cities and new sites, and with that flourished new courts, paintings, weaving, and pottery. The fields of architecture, sculpture, terra cotta, and pottery developed greatly in this phase.",
"title": "People"
},
{
"paragraph_id": 17,
"text": "Camels are highly valued by the desert dwellers. Camels are not only useful for transportation and loading purposes, but its skin and wool are also quite valuable. Camel wool is spun and woven into beautiful woolen blankets known as falsies and into stylish and durable rugs. The camel's leather is also utilized in making caps, goblets, and expensive lampshades.",
"title": "People"
},
{
"paragraph_id": 18,
"text": "Leather work is another important local cottage industry due to the large number of livestock here. Other than the products mentioned above, Khusa (shoes) is a specialty of this area. Cholistani khusas are very famous for the quality of workmanship, variety, and richness of designs especially when stitched and embroidered with golden or brightly-colored threads.",
"title": "People"
},
{
"paragraph_id": 19,
"text": "The people of Cholistan are fond of jewelry, especially gold jewelry. The chief ornaments made and worn by them are Nath (nosegay), Katmala (necklace) Kangan (bracelet), and Pazeb (anklets). Gold and silver bangles are also a product of Cholistan. The locals similarly work in enamel, producing enamel buttons, earrings, bangles, and rings.",
"title": "People"
},
{
"paragraph_id": 20,
"text": "Subsoil water in Cholistan is typically brackish, and unsuitable for most plant growth. Native trees, shrubs, and grasses are drought tolerant. There are 131 plant species in Cholistan from 89 genera and 24 families. Most common of them are below;",
"title": "Ecology"
},
{
"paragraph_id": 21,
"text": "A man-made forest called Dingarh was developed by the Pakistan Council of Research in Water Resources (PCRWR) on more than 100 ha. Dunes were fixed and stabilized by mechanical and vegetative means, and the area is now covered with trees with orchards of zizyphus, date palms, and grassland grown with collected rainwater and saline groundwater.",
"title": "Ecology"
},
{
"paragraph_id": 22,
"text": "The wildlife of Cholistan desert mostly consists of migratory birds, especially the Houbara bustard who migrates to this part during winter. This species of bird is most famous in the hunting season, even though they are endangered in Pakistan (vulnerable globally), according to the IUCN Red List. Their population has decreased from 4,746 in 2001 to just a few dozens in recent times. In December 2016, a Qatari prince had his hunting license rejected due to the species being endangered. Another prince, Dr. Fahad was fined with Rs. 80,000 ($760) and all of the birds he caught were set free for hunting without permit and license. A few endangered species in this desert are the Chinkara Antelope, Great Indian Bustard, and Blue Bull, etc. Their population of Chinkara has decreased from 3,000 in 2007 to just a little above 1,000 in 2010 due to non-permit hunting of the species by influential political families.",
"title": "Ecology"
},
{
"paragraph_id": 23,
"text": "The Indus civilization was one of the earliest centres of pottery, and thus the pottery of Cholistan has a long history. Local soil is very fine and suitable for making pottery. The fineness of the earth can be observed on the Kacha houses which are actually plastered with mud but look like they have been white washed. The chief Cholistani ceramic articles are their surahies, piyalas, and glasses, remarkable for their lightness and fine finishing.",
"title": "Terracotta"
},
{
"paragraph_id": 24,
"text": "In earlier times, only the art of pottery and terracotta developed, but from the seventh century onwards, a large number of temples and images were also built on account of the intensified religious passions and the accumulation of wealth in cities.",
"title": "Terracotta"
},
{
"paragraph_id": 25,
"text": "28°30′N 70°00′E / 28.500°N 70.000°E / 28.500; 70.000",
"title": "External links"
}
] | The Cholistan Desert, also locally known as Rohi (روہی), is a desert in the southern part of Punjab, Pakistan that forms part of the Greater Thar Desert, which extends to Sindh province and the Indian state of Rajasthan. It is one of two large deserts in Punjab, the other being the Thal Desert. The name is derived from the Turkic word chol, meaning "sands," and istan, a Persian suffix meaning "land of." Cholistan was a center for caravan trade, leading to the construction of numerous forts in the medieval period to protect trade routes - of which the Derawar Fort is the best-preserved example. | 2001-11-25T19:35:54Z | 2023-12-27T11:31:26Z | [
"Template:Reflist",
"Template:PunjabGeography",
"Template:Nastaliq",
"Template:Convert",
"Template:See also",
"Template:Commons category",
"Template:Neighbourhoods of Bahawalpur",
"Template:Authority control",
"Template:Coord",
"Template:Which lang",
"Template:Cite book",
"Template:Cite web",
"Template:Short description",
"Template:Use dmy dates",
"Template:Infobox valley",
"Template:Lang-ur",
"Template:Cite journal",
"Template:Deserts"
] | https://en.wikipedia.org/wiki/Cholistan_Desert |
7,233 | Causantín mac Cináeda | Causantín mac Cináeda (Modern Gaelic: Còiseam mac Choinnich; died 877) was a king of the Picts. He is often known as Constantine I in reference to his place in modern lists of Scottish monarchs, but contemporary sources described Causantín only as a Pictish king. A son of Cináed mac Ailpín ("Kenneth MacAlpin"), he succeeded his uncle Domnall mac Ailpín as Pictish king following the latter's death on 13 April 862. It is likely that Causantín's reign witnessed increased activity by Vikings, based in Ireland, Northumbria and northern Britain. He died fighting one such invasion.
Very few records of ninth century events in northern Britain survive. The main local source from the period is the Chronicle of the Kings of Alba, a list of kings from Cináed mac Ailpín (died 858) to Cináed mac Maíl Coluim (died 995). The list survives in the Poppleton Manuscript, a thirteenth-century compilation. Originally simply a list of kings with reign lengths, the other details contained in the Poppleton Manuscript version were added from the tenth century onwards. In addition to this, later king lists survive. The earliest genealogical records of the descendants of Cináed mac Ailpín may date from the end of the tenth century, but their value lies more in their context, and the information they provide about the interests of those for whom they were compiled, than in the unreliable claims they contain. The Pictish king-lists originally ended with this Causantín, who was reckoned the seventieth and last king of the Picts.
For narrative history the principal sources are the Anglo-Saxon Chronicle and the Irish annals. While Scandinavian sagas describe events in 9th century Britain, their value as sources of historical narrative, rather than documents of social history, is disputed. If the sources for north-eastern Britain, the lands of the kingdom of Northumbria and the former Pictland, are limited and late, those for the areas on the Irish Sea and Atlantic coasts—the modern regions of north-west England and all of northern and western Scotland—are non-existent, and archaeology and toponymy are of primary importance.
Writing a century before Causantín was born, Bede recorded five languages in Britain. Latin, the common language of the church; Old English, the language of the Angles and Saxons; Irish, spoken on the western coasts of Britain and in Ireland; Brythonic, ancestor of the Welsh language, spoken in large parts of western Britain; and Pictish, spoken in northern Britain. By the ninth century a sixth language, Old Norse, had arrived with the Vikings.
Viking activity in northern Britain appears to have reached a peak during Causantín's reign. Viking armies were led by a small group of men who may have been kinsmen. Among those noted by the Irish annals, the Chronicle of the Kings of Alba and the Anglo-Saxon Chronicle are Ívarr—Ímar in Irish sources—who was active from East Anglia to Ireland, Halfdán—Albdann in Irish, Healfdene in Old English— and Amlaíb or Óláfr. As well as these leaders, various others related to them appear in the surviving record.
Viking activity in Britain increased in 865 when the Great Heathen Army, probably a part of the forces which had been active in Francia, landed in East Anglia. The following year, having obtained tribute from the East Anglian King Edmund, the Great Army moved north, seizing York, chief city of the Northumbrians. The Great Army defeated an attack on York by the two rivals for the Northumbrian throne, Osberht and Ælla, who had put aside their differences in the face of a common enemy. Both would-be kings were killed in the failed assault, probably on 21 March 867. Following this, the leaders of the Great Army are said to have installed one Ecgberht as king of the Northumbrians. Their next target was Mercia where King Burgred, aided by his brother-in-law King Æthelred of Wessex, drove them off.
While the kingdoms of East Anglia, Mercia and Northumbria were under attack, other Viking armies were active in the far north. Amlaíb and Auisle (Ásl or Auðgísl), said to be his brother, brought an army to Fortriu and obtained tribute and hostages in 866. Historians disagree as to whether the army returned to Ireland in 866, 867 or even in 869. Late sources of uncertain reliability state that Auisle was killed by Amlaíb in 867 in a dispute over Amlaíb's wife, the daughter of Cináed. It is unclear whether, if accurate, this woman should be identified as a daughter of Cináed mac Ailpín, and thus Causantín's sister, or as a daughter of Cináed mac Conaing, king of Brega. While Amlaíb and Auisle were in north Britain, the Annals of Ulster record that Áed Findliath, High King of Ireland, took advantage of their absence to destroy the longphorts along the northern coasts of Ireland. Áed Findliath was married to Causantín's sister Máel Muire. She later married Áed's successor Flann Sinna. Her death is recorded in 913.
In 870, Amlaíb and Ívarr attacked Dumbarton Rock, where the River Leven meets the River Clyde, the chief place of the kingdom of Alt Clut, south-western neighbour of Pictland. The siege lasted four months before the fortress fell to the Vikings who returned to Ireland with many prisoners, "Angles, Britons and Picts", in 871. Archaeological evidence suggests that Dumbarton Rock was largely abandoned and that Govan replaced it as the chief place of the kingdom of Strathclyde, as Alt Clut was later known. King Artgal of Alt Clut did not long survive these events, being killed "at the instigation" of Causantín son of Cináed two years later. Artgal's son and successor Run was married to a sister of Causantín.
Amlaíb disappears from Irish annals after his return to Ireland in 871. According to the Chronicle of the Kings of Alba he was killed by Causantín either in 871 or 872 when he returned to Pictland to collect further tribute. His ally Ívarr died in 873.
In 875, the Chronicle and the Annals of Ulster again report a Viking army in Pictland; the Annals of Ulster say that "a great slaughter of the Picts resulted". No name is given to the battle in which the slaughter occurred, yet the Chronicle notes a battle fought between Danes and Scots near Dollar but notes a subsequent "annihilation" at Atholl. In 877, shortly after building a new church for the Culdees at St Andrews, Causantín was captured and executed (or perhaps killed in battle) after defending against Viking raiders. Although there is agreement on the time and general manner of his death, it is not clear where this happened. Some believe he was beheaded on a Fife beach, following a battle at Fife Ness, near Crail. William Forbes Skene reads the Chronicle as placing Causantín's death at Inverdovat (by Newport-on-Tay), which appears to match the Prophecy of Berchán. The account in the Chronicle of Melrose names the place as the "Black Cave," and John of Fordun calls it the "Black Den". Causantín was buried on Iona.
Causantín's son Domnall and his descendants represented the main line of the kings of Alba and later Scotland. | [
{
"paragraph_id": 0,
"text": "Causantín mac Cináeda (Modern Gaelic: Còiseam mac Choinnich; died 877) was a king of the Picts. He is often known as Constantine I in reference to his place in modern lists of Scottish monarchs, but contemporary sources described Causantín only as a Pictish king. A son of Cináed mac Ailpín (\"Kenneth MacAlpin\"), he succeeded his uncle Domnall mac Ailpín as Pictish king following the latter's death on 13 April 862. It is likely that Causantín's reign witnessed increased activity by Vikings, based in Ireland, Northumbria and northern Britain. He died fighting one such invasion.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Very few records of ninth century events in northern Britain survive. The main local source from the period is the Chronicle of the Kings of Alba, a list of kings from Cináed mac Ailpín (died 858) to Cináed mac Maíl Coluim (died 995). The list survives in the Poppleton Manuscript, a thirteenth-century compilation. Originally simply a list of kings with reign lengths, the other details contained in the Poppleton Manuscript version were added from the tenth century onwards. In addition to this, later king lists survive. The earliest genealogical records of the descendants of Cináed mac Ailpín may date from the end of the tenth century, but their value lies more in their context, and the information they provide about the interests of those for whom they were compiled, than in the unreliable claims they contain. The Pictish king-lists originally ended with this Causantín, who was reckoned the seventieth and last king of the Picts.",
"title": "Sources"
},
{
"paragraph_id": 2,
"text": "For narrative history the principal sources are the Anglo-Saxon Chronicle and the Irish annals. While Scandinavian sagas describe events in 9th century Britain, their value as sources of historical narrative, rather than documents of social history, is disputed. If the sources for north-eastern Britain, the lands of the kingdom of Northumbria and the former Pictland, are limited and late, those for the areas on the Irish Sea and Atlantic coasts—the modern regions of north-west England and all of northern and western Scotland—are non-existent, and archaeology and toponymy are of primary importance.",
"title": "Sources"
},
{
"paragraph_id": 3,
"text": "Writing a century before Causantín was born, Bede recorded five languages in Britain. Latin, the common language of the church; Old English, the language of the Angles and Saxons; Irish, spoken on the western coasts of Britain and in Ireland; Brythonic, ancestor of the Welsh language, spoken in large parts of western Britain; and Pictish, spoken in northern Britain. By the ninth century a sixth language, Old Norse, had arrived with the Vikings.",
"title": "Languages and names"
},
{
"paragraph_id": 4,
"text": "Viking activity in northern Britain appears to have reached a peak during Causantín's reign. Viking armies were led by a small group of men who may have been kinsmen. Among those noted by the Irish annals, the Chronicle of the Kings of Alba and the Anglo-Saxon Chronicle are Ívarr—Ímar in Irish sources—who was active from East Anglia to Ireland, Halfdán—Albdann in Irish, Healfdene in Old English— and Amlaíb or Óláfr. As well as these leaders, various others related to them appear in the surviving record.",
"title": "Amlaíb and Ímar"
},
{
"paragraph_id": 5,
"text": "Viking activity in Britain increased in 865 when the Great Heathen Army, probably a part of the forces which had been active in Francia, landed in East Anglia. The following year, having obtained tribute from the East Anglian King Edmund, the Great Army moved north, seizing York, chief city of the Northumbrians. The Great Army defeated an attack on York by the two rivals for the Northumbrian throne, Osberht and Ælla, who had put aside their differences in the face of a common enemy. Both would-be kings were killed in the failed assault, probably on 21 March 867. Following this, the leaders of the Great Army are said to have installed one Ecgberht as king of the Northumbrians. Their next target was Mercia where King Burgred, aided by his brother-in-law King Æthelred of Wessex, drove them off.",
"title": "Amlaíb and Ímar"
},
{
"paragraph_id": 6,
"text": "While the kingdoms of East Anglia, Mercia and Northumbria were under attack, other Viking armies were active in the far north. Amlaíb and Auisle (Ásl or Auðgísl), said to be his brother, brought an army to Fortriu and obtained tribute and hostages in 866. Historians disagree as to whether the army returned to Ireland in 866, 867 or even in 869. Late sources of uncertain reliability state that Auisle was killed by Amlaíb in 867 in a dispute over Amlaíb's wife, the daughter of Cináed. It is unclear whether, if accurate, this woman should be identified as a daughter of Cináed mac Ailpín, and thus Causantín's sister, or as a daughter of Cináed mac Conaing, king of Brega. While Amlaíb and Auisle were in north Britain, the Annals of Ulster record that Áed Findliath, High King of Ireland, took advantage of their absence to destroy the longphorts along the northern coasts of Ireland. Áed Findliath was married to Causantín's sister Máel Muire. She later married Áed's successor Flann Sinna. Her death is recorded in 913.",
"title": "Amlaíb and Ímar"
},
{
"paragraph_id": 7,
"text": "In 870, Amlaíb and Ívarr attacked Dumbarton Rock, where the River Leven meets the River Clyde, the chief place of the kingdom of Alt Clut, south-western neighbour of Pictland. The siege lasted four months before the fortress fell to the Vikings who returned to Ireland with many prisoners, \"Angles, Britons and Picts\", in 871. Archaeological evidence suggests that Dumbarton Rock was largely abandoned and that Govan replaced it as the chief place of the kingdom of Strathclyde, as Alt Clut was later known. King Artgal of Alt Clut did not long survive these events, being killed \"at the instigation\" of Causantín son of Cináed two years later. Artgal's son and successor Run was married to a sister of Causantín.",
"title": "Amlaíb and Ímar"
},
{
"paragraph_id": 8,
"text": "Amlaíb disappears from Irish annals after his return to Ireland in 871. According to the Chronicle of the Kings of Alba he was killed by Causantín either in 871 or 872 when he returned to Pictland to collect further tribute. His ally Ívarr died in 873.",
"title": "Amlaíb and Ímar"
},
{
"paragraph_id": 9,
"text": "In 875, the Chronicle and the Annals of Ulster again report a Viking army in Pictland; the Annals of Ulster say that \"a great slaughter of the Picts resulted\". No name is given to the battle in which the slaughter occurred, yet the Chronicle notes a battle fought between Danes and Scots near Dollar but notes a subsequent \"annihilation\" at Atholl. In 877, shortly after building a new church for the Culdees at St Andrews, Causantín was captured and executed (or perhaps killed in battle) after defending against Viking raiders. Although there is agreement on the time and general manner of his death, it is not clear where this happened. Some believe he was beheaded on a Fife beach, following a battle at Fife Ness, near Crail. William Forbes Skene reads the Chronicle as placing Causantín's death at Inverdovat (by Newport-on-Tay), which appears to match the Prophecy of Berchán. The account in the Chronicle of Melrose names the place as the \"Black Cave,\" and John of Fordun calls it the \"Black Den\". Causantín was buried on Iona.",
"title": "Last days of the Pictish kingdom"
},
{
"paragraph_id": 10,
"text": "Causantín's son Domnall and his descendants represented the main line of the kings of Alba and later Scotland.",
"title": "Aftermath"
}
] | Causantín mac Cináeda was a king of the Picts. He is often known as Constantine I in reference to his place in modern lists of Scottish monarchs, but contemporary sources described Causantín only as a Pictish king. A son of Cináed mac Ailpín, he succeeded his uncle Domnall mac Ailpín as Pictish king following the latter's death on 13 April 862. It is likely that Causantín's reign witnessed increased activity by Vikings, based in Ireland, Northumbria and northern Britain. He died fighting one such invasion. | 2002-02-25T15:51:15Z | 2023-08-05T19:58:42Z | [
"Template:Efn",
"Template:S-hou",
"Template:S-ttl",
"Template:Refend",
"Template:S-reg",
"Template:Reflist",
"Template:Cite web",
"Template:Refbegin",
"Template:S-bef",
"Template:S-aft",
"Template:Pictish and Scottish monarchs",
"Template:English, Scottish and British monarchs",
"Template:Lang",
"Template:Citation",
"Template:Cite ODNB",
"Template:S-start",
"Template:S-end",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Infobox royalty",
"Template:Notelist"
] | https://en.wikipedia.org/wiki/Causant%C3%ADn_mac_Cin%C3%A1eda |
7,234 | Constantine II (emperor) | Constantine II (Latin: Flavius Claudius Constantinus; February 316 – 340) was Roman emperor from 337 to 340. Son of Constantine the Great and co-emperor alongside his brothers, his attempt to exert his perceived rights of primogeniture led to his death in a failed invasion of Italy in 340.
The eldest son of Constantine the Great and Fausta, Constantine II was born in Arles in February 316 and raised as a Christian.
On 1 March 317, he was made caesar. In 323, at the age of seven, he took part in his father's campaign against the Sarmatians. At age ten, he became commander of Gaul, following the death of his half-brother Crispus. An inscription dating to 330 records the title of Alamannicus, so it is probable that his generals won a victory over the Alamanni. His military career continued when Constantine I made him field commander during the 332 winter campaign against the Goths. The military operation was successful and decisive, with 100,000 Goths reportedly slain and the surrender of the ruler Ariaric.
Following the death of his father in 337, Constantine II initially became emperor jointly with his brothers Constantius II and Constans, with the empire divided between them and their cousins, the caesars Dalmatius and Hannibalianus. This arrangement barely survived Constantine I's death, as his sons arranged the slaughter of most of the rest of the family by the army. As a result, the three brothers gathered together in Pannonia and there, on 9 September 337, divided the Roman world among themselves. Constantine, proclaimed Augustus by the troops, received Gaul, Britannia and Hispania.
He was soon involved in the struggle between factions rupturing the unity of the Christian Church. The Western portion of the empire, under the influence of the Popes in Rome, favoured Nicene Christianity over Arianism, and through their intercession they convinced Constantine to free Athanasius, allowing him to return to Alexandria. This action aggravated Constantius II, who was a committed supporter of Arianism.
Constantine was initially the guardian of his younger brother Constans, whose portion of the empire was Italia, Africa and Illyricum. Constantine soon complained that he had not received the amount of territory that was his due as the eldest son. Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, to which he agreed in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage, and thus Constantine, and which belonged to Italy, and therefore Constans.
Further complications arose when Constans came of age and Constantine, who had grown accustomed to dominating his younger brother, would not relinquish the guardianship. In 340 Constantine marched into Italy at the head of his troops to claim territory from Constans. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was engaged in military operations and was killed by Constans's generals in an ambush outside Aquileia. Constans then took control of his deceased brother's realm. | [
{
"paragraph_id": 0,
"text": "Constantine II (Latin: Flavius Claudius Constantinus; February 316 – 340) was Roman emperor from 337 to 340. Son of Constantine the Great and co-emperor alongside his brothers, his attempt to exert his perceived rights of primogeniture led to his death in a failed invasion of Italy in 340.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The eldest son of Constantine the Great and Fausta, Constantine II was born in Arles in February 316 and raised as a Christian.",
"title": "Career"
},
{
"paragraph_id": 2,
"text": "On 1 March 317, he was made caesar. In 323, at the age of seven, he took part in his father's campaign against the Sarmatians. At age ten, he became commander of Gaul, following the death of his half-brother Crispus. An inscription dating to 330 records the title of Alamannicus, so it is probable that his generals won a victory over the Alamanni. His military career continued when Constantine I made him field commander during the 332 winter campaign against the Goths. The military operation was successful and decisive, with 100,000 Goths reportedly slain and the surrender of the ruler Ariaric.",
"title": "Career"
},
{
"paragraph_id": 3,
"text": "Following the death of his father in 337, Constantine II initially became emperor jointly with his brothers Constantius II and Constans, with the empire divided between them and their cousins, the caesars Dalmatius and Hannibalianus. This arrangement barely survived Constantine I's death, as his sons arranged the slaughter of most of the rest of the family by the army. As a result, the three brothers gathered together in Pannonia and there, on 9 September 337, divided the Roman world among themselves. Constantine, proclaimed Augustus by the troops received Gaul, Britannia and Hispania.",
"title": "Career"
},
{
"paragraph_id": 4,
"text": "He was soon involved in the struggle between factions rupturing the unity of the Christian Church. The Western portion of the empire, under the influence of the Popes in Rome, favoured Nicene Christianity over Arianism, and through their intercession they convinced Constantine to free Athanasius, allowing him to return to Alexandria. This action aggravated Constantius II, who was a committed supporter of Arianism.",
"title": "Career"
},
{
"paragraph_id": 5,
"text": "Constantine was initially the guardian of his younger brother Constans, whose portion of the empire was Italia, Africa and Illyricum. Constantine soon complained that he had not received the amount of territory that was his due as the eldest son. Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, to which he agreed in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage, and thus Constantine, and which belonged to Italy, and therefore Constans.",
"title": "Career"
},
{
"paragraph_id": 6,
"text": "Further complications arose when Constans came of age and Constantine, who had grown accustomed to dominating his younger brother, would not relinquish the guardianship. In 340 Constantine marched into Italy at the head of his troops to claim territory from Constans. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was engaged in military operations and was killed by Constans's generals in an ambush outside Aquileia. Constans then took control of his deceased brother's realm.",
"title": "Career"
}
] | Constantine II was Roman emperor from 337 to 340. Son of Constantine the Great and co-emperor alongside his brothers, his attempt to exert his perceived rights of primogeniture led to his death in a failed invasion of Italy in 340. | 2001-11-25T23:57:38Z | 2023-12-21T19:57:02Z | [
"Template:Cite book",
"Template:Commons-inline",
"Template:S-end",
"Template:Reflist",
"Template:Chart top",
"Template:Tree chart/start",
"Template:Smallcaps",
"Template:S-aft",
"Template:Short description",
"Template:Citation",
"Template:S-reg",
"Template:Sfn",
"Template:S-bef",
"Template:S-off",
"Template:Roman emperors",
"Template:Cite encyclopedia",
"Template:Tree chart",
"Template:S-hou",
"Template:Infobox royalty",
"Template:See also",
"Template:ISBN",
"Template:S-start",
"Template:Use dmy dates",
"Template:Constantinian dynasty family tree",
"Template:Tree chart/end",
"Template:Break",
"Template:Chart bottom",
"Template:S-ttl",
"Template:Authority control",
"Template:Lang-la"
] | https://en.wikipedia.org/wiki/Constantine_II_(emperor) |
7,235 | Constantine II of Scotland | Causantín mac Áeda (Modern Gaelic: Còiseam mac Aoidh, anglicised Constantine II; born no later than 879; died 952) was an early King of Scotland, known then by the Gaelic name Alba. The Kingdom of Alba, a name which first appears in Constantine's lifetime, was situated in modern-day Northern Scotland.
The core of the kingdom was formed by the lands around the River Tay. Its southern limit was the River Forth, northwards it extended towards the Moray Firth and perhaps to Caithness, while its western limits are uncertain. Constantine's grandfather Kenneth I of Scotland (Cináed mac Ailpín, died 858) was the first of the family recorded as a king, but as king of the Picts. This change of title, from king of the Picts to king of Alba, is part of a broader transformation of Pictland and the origins of the Kingdom of Alba are traced to Constantine's lifetime.
His reign, like those of his predecessors, was dominated by the actions of Viking rulers in the British Isles, particularly the Uí Ímair ('Grandsons/Descendants of Ímar', or Ivar the Boneless). During Constantine's reign, the rulers of the southern kingdoms of Wessex and Mercia, later the Kingdom of England, extended their authority northwards into the disputed kingdoms of Northumbria. At first, the southern rulers allied with him against the Vikings, but in 934 Æthelstan, unprovoked, invaded Scotland both by sea and land with a huge retinue that included four Welsh kings. He ravaged southern Alba, but there is no record of any battles. He had withdrawn by September. Three years later, in 937, probably in retaliation for the invasion of Alba, King Constantine allied with Olaf Guthfrithson, King of Dublin, and Owain ap Dyfnwal, King of Strathclyde, but they were defeated at the battle of Brunanburh. In 943, Constantine abdicated the throne and retired to the Céli Dé (Culdee) monastery of St Andrews, where he died in 952. He was succeeded by his predecessor's son Malcolm I (Máel Coluim mac Domnaill).
Constantine's reign of 43 years, exceeded in Scotland only by that of King William the Lion before the Union of the Crowns in 1603, is believed to have played a defining part in the Gaelicisation of Pictland, in which his patronage of the Irish Céli Dé monastic reformers was a significant factor. During his reign, the words "Scots" and "Scotland" (Old English: Scottas, Scotland) are first used to mean part of what is now Scotland. The earliest evidence for the ecclesiastical and administrative institutions which would last until the Davidian Revolution also appears at this time.
Compared to neighbouring Ireland and Anglo-Saxon England, few records of 9th- and 10th-century events in Scotland survive. The main local source from the period is the Chronicle of the Kings of Alba, a list of kings from Kenneth MacAlpin (died 858) to Kenneth II (Cináed mac Maíl Coluim, died 995). The list survives in the Poppleton Manuscript, a 13th-century compilation. Originally simply a list of kings with reign lengths, the other details contained in the Poppleton Manuscript version were added in the 10th and 12th centuries. In addition to this, later king lists survive. The earliest genealogical records of the descendants of Kenneth MacAlpin may date from the end of the 10th century, but their value lies more in their context, and the information they provide about the interests of those for whom they were compiled, than in the unreliable claims they contain.
For narrative history the principal sources are the Anglo-Saxon Chronicle and the Irish annals. The evidence from charters created in the Kingdom of England provides occasional insight into events in Scotland. While Scandinavian sagas describe events in 10th-century Britain, their value as sources of historical narrative, rather than documents of social history, is disputed. Mainland European sources rarely concern themselves with affairs in any part of the British Isles, and even less commonly with events in Scotland, but the life of Saint Cathróe of Metz, a work of hagiography written in Germany at the end of the 10th century, provides plausible details of the saint's early life in north Britain.
While the sources for north-eastern Britain, the lands of the kingdom of Northumbria and the former Pictland, are limited and late, those for the areas on the Irish Sea and Atlantic coasts—the modern regions of north-west England and all of northern and western Scotland—are non-existent, and archaeology and toponymy are of primary importance.
The dominant kingdom in eastern Scotland before the Viking Age was the northern Pictish kingdom of Fortriu on the shores of the Moray Firth. By the 9th century, the Gaels of Dál Riata (Dalriada) were subject to the kings of Fortriu of the family of Constantín mac Fergusa (Constantine son of Fergus). Constantín's family dominated Fortriu after 789 and perhaps, if Constantín was a kinsman of Óengus I of the Picts (Óengus son of Fergus), from around 730. The dominance of Fortriu came to an end in 839 with a defeat by Viking armies reported by the Annals of Ulster in which King Uen of Fortriu and his brother Bran, Constantín's nephews, together with the king of Dál Riata, Áed mac Boanta, "and others almost innumerable" were killed. These deaths led to a period of instability lasting a decade as several families attempted to establish their dominance in Pictland. By around 848 Kenneth MacAlpin had emerged as the winner.
Later national myth made Kenneth MacAlpin the creator of the kingdom of Scotland, the founding of which was dated from 843, the year in which he was said to have destroyed the Picts and inaugurated a new era. The historical record for 9th-century Scotland is meagre, but the Irish annals and the 10th-century Chronicle of the Kings of Alba agree that Kenneth was a Pictish king, and call him "king of the Picts" at his death. The same style is used of Kenneth's brother Donald I (Domnall mac Ailpín) and sons Constantine I (Constantín mac Cináeda) and Áed (Áed mac Cináeda).
The kingdom ruled by Kenneth's descendants—older works used the name House of Alpin to describe them but descent from Kenneth was the defining factor, Irish sources referring to Clann Cináeda meic Ailpín ("the Clan of Kenneth MacAlpin")—lay to the south of the previously dominant kingdom of Fortriu, centred in the lands around the River Tay. The extent of Kenneth's nameless kingdom is uncertain, but it certainly extended from the Firth of Forth in the south to the Mounth in the north. Whether it extended beyond the mountainous spine of north Britain—Druim Alban—is unclear. The core of the kingdom was similar to the old counties of Mearns, Forfar, Perth, Fife, and Kinross. Among the chief ecclesiastical centres named in the records are Dunkeld, probably seat of the bishop of the kingdom, and Cell Rígmonaid (modern St Andrews).
Kenneth's son Constantine died in 876, probably killed fighting against a Viking army that had come north from Northumbria in 874. According to the king lists, he was counted as the 70th and last king of the Picts in later times.
In 899 Alfred the Great, king of Wessex, died leaving his son Edward the Elder as ruler of England south of the River Thames and his daughter Æthelflæd and son-in-law Æthelred ruling the western, English part of Mercia. The situation in the Danish kingdoms of eastern England is less clear. King Eohric was probably ruling in East Anglia, but no dates can reliably be assigned to the successors of Guthfrith of York in Northumbria. It is known that Guthfrith was succeeded by Sigurd and Cnut, although whether these men ruled jointly or one after the other is uncertain. Northumbria may have been divided by this time between the Viking kings in York and the local rulers, perhaps represented by Eadulf, based at Bamburgh who controlled the lands from the River Tyne or River Tees to the Forth in the north.
In Ireland, Flann Sinna, married to Constantine's aunt Máel Muire, was dominant. The years around 900 represented a period of weakness among the Vikings and Norse-Gaels of Dublin. They are reported to have been divided between two rival leaders. In 894 one group left Dublin, perhaps settling on the Irish Sea coast of Britain between the River Mersey and the Firth of Clyde. The remaining Dubliners were expelled in 902 by Flann Sinna's son-in-law Cerball mac Muirecáin, and soon afterwards appeared in western and northern Britain.
To the southwest of Constantine's lands lay the kingdom of Strathclyde. This extended north into the Lennox, east to the River Forth, and south into the Southern Uplands. In 900 it was probably ruled by King Dyfnwal.
The situation of the Gaelic kingdoms of Dál Riata in western Scotland is uncertain. No kings are known by name after Áed mac Boanta. The Frankish Annales Bertiniani may record the conquest of the Inner Hebrides, the seaward part of Dál Riata, by Northmen in 849. In addition to these, the arrival of new groups of Vikings from northern and western Europe was still commonplace. Whether there were Viking or Norse-Gael kingdoms in the Western Isles or the Northern Isles at this time is debated.
Áed, Constantine's father, succeeded Constantine's uncle and namesake Constantine I in 876 but was killed in 878. Áed's short reign is glossed as being of no importance by most king lists. Although the date of his birth is nowhere recorded, Constantine II cannot have been born any later than the year after his father's death, i.e., 879. His name may suggest that he was born a few years earlier, during the reign of his uncle Constantine I.
After Áed's death, there is a two-decade gap until the death of Donald II (Domnall mac Constantín) in 900 during which nothing is reported in the Irish annals. The entry for the reign between Áed and Donald II is corrupt in the Chronicle of the Kings of Alba, and in this case, the Chronicle is at variance with every other king list. According to the Chronicle, Áed was followed by Eochaid, a grandson of Kenneth MacAlpin, who is somehow connected with Giric, but all other lists say that Giric ruled after Áed and make great claims for him. Giric is not known to have been a kinsman of Kenneth's, although it has been suggested that he was related to him by marriage. The major changes in Pictland which began at about this time have been associated by Alex Woolf and Archie Duncan with Giric's reign.
Woolf suggests that Constantine and his younger brother Donald may have passed Giric's reign in exile in Ireland where their aunt Máel Muire was wife of two successive High Kings of Ireland, Áed Findliath and Flann Sinna. Giric died in 889. If he had been in exile, Constantine may have returned to Pictland where his cousin Donald II became king. Donald's reputation is suggested by the epithet dasachtach, a word used of violent madmen and mad bulls, attached to him in the 11th-century writings of Flann Mainistrech, echoed by his description in the Prophecy of Berchan as "the rough one who will think relics and psalms of little worth". Wars with the Viking kings in Britain and Ireland continued during Donald's reign and he was probably killed fighting yet more Vikings at Dunnottar in the Mearns in 900. Constantine succeeded him as king.
The earliest event recorded in the Chronicle of the Kings of Alba in Constantine's reign is an attack by Vikings and the plundering of Dunkeld "and all Albania" in his third year. This is the first use of the word Albania, the Latin form of the Old Irish Alba, in the Chronicle which until then describes the lands ruled by the descendants of Cináed as Pictavia.
These Norsemen could have been some of those who were driven out of Dublin in 902, or were the same group who had defeated Domnall in 900. The Chronicle states that the Northmen were killed in Srath Erenn, which is confirmed by the Annals of Ulster which records the death of Ímar grandson of Ímar and many others at the hands of the men of Fortriu in 904. This Ímar was the first of the Uí Ímair, the grandsons of Ímar, to be reported; three more grandsons of Ímar appear later in Constantín's reign. The Fragmentary Annals of Ireland contain an account of the battle, and this attributes the defeat of the Norsemen to the intercession of Saint Columba following fasting and prayer. An entry in the Chronicon Scotorum under the year 904 may possibly contain a corrupted reference to this battle.
The next event reported by the Chronicle of the Kings of Alba is dated to 906. This records that:
King Constantine and Bishop Cellach met at the Hill of Belief near the royal city of Scone and pledged themselves that the laws and disciplines of the faith, and the laws of churches and gospels, should be kept pariter cum Scottis.
The meaning of this entry, and its significance, have been the subject of debate.
The phrase pariter cum Scottis in the Latin text of the Chronicle has been translated in several ways. William Forbes Skene and Alan Orr Anderson proposed that it should be read as "in conformity with the customs of the Gaels", relating it to the claims in the king lists that Giric liberated the church from secular oppression and adopted Irish customs. It has been read as "together with the Gaels", suggesting either public participation or the presence of Gaels from the western coasts as well as the people of the east coast. Finally, it is suggested that it was the ceremony that followed "the custom of the Gaels" and not the agreements.
The idea that this gathering agreed to uphold Irish laws governing the church has suggested that it was an important step in the gaelicisation of the lands east of Druim Alban. Others have proposed that the ceremony in some way endorsed Constantine's kingship, prefiguring later royal inaugurations at Scone. Alternatively, if Bishop Cellach was appointed by Giric, it may be that the gathering was intended to heal a rift between king and church.
Following the events at Scone, there is little of substance reported for a decade. A story in the Fragmentary Annals of Ireland, perhaps referring to events sometime after 911, claims that Queen Æthelflæd, who ruled in Mercia, allied with the Irish and northern rulers against the Norsemen on the Irish sea coasts of Northumbria. The Annals of Ulster record the defeat of an Irish fleet from the kingdom of Ulaid by Vikings "on the coast of England" at about this time.
In this period the Chronicle of the Kings of Alba reports the death of Cormac mac Cuilennáin, king of Munster, in the eighth year of Constantine's reign. This is followed by an undated entry which was formerly read as "In his time Domnall [i.e. Dyfnwal], king of the [Strathclyde] Britons died, and Domnall son of Áed was elected". This was thought to record the election of a brother of Constantine named Domnall to the kingship of the Britons of Strathclyde and was seen as early evidence of the domination of Strathclyde by the kings of Alba. The entry in question is now read as "...Dyfnwal... and Domnall son of Áed king of Ailech died", this Domnall being a son of Áed Findliath who died on 21 March 915. Finally, the deaths of Flann Sinna and Niall Glúndub are recorded.
There are more reports of Viking fleets in the Irish Sea from 914 onwards. By 916 fleets under Sihtric Cáech and Ragnall, said to be grandsons of Ímar (that is, they belonged to the same Uí Ímair kindred as the Ímar who was killed in 904), were very active in Ireland. Sihtric inflicted a heavy defeat on the armies of Leinster and retook Dublin in 917. The following year Ragnall appears to have returned across the Irish sea intent on establishing himself as king at York. The only precisely dated event in the summer of 918 is the death of Queen Æthelflæd on 12 June 918 at Tamworth, Staffordshire. Æthelflæd had been negotiating with the Northumbrians to obtain their submission, but her death put an end to this and her successor, her brother Edward the Elder, was occupied with securing control of Mercia.
The northern part of Northumbria, and perhaps the whole kingdom, had probably been ruled by Ealdred son of Eadulf since 913. Faced with Ragnall's invasion, Ealdred came north seeking assistance from Constantine. The two advanced south to face Ragnall, and this led to a battle somewhere on the banks of the River Tyne, probably at Corbridge where Dere Street crosses the river. The Battle of Corbridge appears to have been indecisive; the Chronicle of the Kings of Alba is alone in giving Constantine the victory.
The report of the battle in the Annals of Ulster says that none of the kings or mormaers among the men of Alba were killed. This is the first surviving use of the word mormaer; other than the knowledge that Constantine's kingdom had its own bishop or bishops and royal villas, this is the only hint to the institutions of the kingdom.
After Corbridge, Ragnall enjoyed only a short respite. In the south, Alfred's son Edward had rapidly secured control of Mercia and had a burh constructed at Bakewell in the Peak District from which his armies could easily strike north. An army from Dublin led by Ragnall's kinsman Sihtric struck at north-western Mercia in 919, but in 920 or 921 Edward met with Ragnall and other kings. The Anglo-Saxon Chronicle states that these kings "chose Edward as father and lord". Among the other kings present were Constantine, Ealdred son of Eadwulf, and the king of Strathclyde, Owain ap Dyfnwal. Here, again, a new term appears in the record, the Anglo-Saxon Chronicle for the first time using the word scottas, from which Scots derives, to describe the inhabitants of Constantine's kingdom in its report of these events.
Edward died in 924. His realms appear to have been divided with the West Saxons recognising Ælfweard while the Mercians chose Æthelstan who had been raised at Æthelflæd's court. Ælfweard died within weeks of his father and Æthelstan was inaugurated as king of all of Edward's lands in 925.
By 926 Sihtric had evidently acknowledged Æthelstan as overlord, adopting Christianity and marrying a sister of Æthelstan at Tamworth. Within the year he appears to have forsaken his new faith and repudiated his wife, but before Æthelstan could respond, Sihtric died suddenly in 927. His kinsman, perhaps brother, Gofraid, who had remained as his deputy in Dublin, came from Ireland to take power in York, but failed. Æthelstan moved quickly, seizing much of Northumbria. In less than a decade, the kingdom of the English had become by far the greatest power in Britain and Ireland, perhaps stretching as far north as the Firth of Forth.
John of Worcester's chronicle suggests that Æthelstan faced opposition from Constantine, Owain, and the Welsh kings. William of Malmesbury writes that Gofraid, together with Sihtric's young son Olaf Cuaran fled north and received refuge from Constantine, which led to war with Æthelstan. A meeting at Eamont Bridge on 12 July 927 was sealed by an agreement that Constantine, Owain, Hywel Dda, and Ealdred would "renounce all idolatry": that is, they would not ally with the Viking kings. William states that Æthelstan stood godfather to a son of Constantine, probably Indulf (Ildulb mac Constantín), during the conference.
Æthelstan followed up his advances in the north by securing the recognition of the Welsh kings. For the next seven years, the record of events in the north is blank. Æthelstan's court was attended by the Welsh kings, but not by Constantine or Owain. This absence of record means that Æthelstan's reasons for marching north against Constantine in 934 are unclear.
Æthelstan's invasion is reported in brief by the Anglo-Saxon Chronicle, and later chroniclers such as John of Worcester, William of Malmesbury, Henry of Huntingdon, and Symeon of Durham add detail to that bald account. Æthelstan's army began gathering at Winchester by 28 May 934, and travelled north to Nottingham by 7 June. He was accompanied by many leaders, including the Welsh kings Hywel Dda, Idwal Foel, and Morgan ab Owain. From Mercia the army continued to Chester-le-Street, before resuming the march accompanied by a fleet of ships. Owain was defeated and Symeon states that the army went as far north as Dunnottar and Fortriu, while the fleet is said to have raided Caithness, by which a much larger area, including Sutherland, is probably intended. It is unlikely that Constantine's personal authority extended so far north, so the attacks were probably directed at his allies, comprising simple looting expeditions.
The Annals of Clonmacnoise state that "the Scottish men compelled [Æthelstan] to return without any great victory", while Henry of Huntingdon claims that the English faced no opposition. A negotiated settlement might have ended matters: according to John of Worcester, a son of Constantine was given as a hostage to Æthelstan and Constantine himself accompanied the English king on his return south. He witnessed a charter with Æthelstan at Buckingham on 13 September 934 in which he is described as subregulus, i.e., a king acknowledging Æthelstan's overlordship, the only place there is any record of such a description. However, there is no record of Constantine having ever submitted to Æthelstan's overlordship or that he considered himself such. The following year, Constantine was again in England at Æthelstan's court, this time at Cirencester where he appears as a witness, as the first of several kings, followed by Owain and Hywel Dda, who subscribed to the diploma. At Christmas of 935, Owain was once more at Æthelstan's court along with the Welsh kings, but Constantine was not. His return to England less than two years later would be in very different circumstances.
Following his departure from Æthelstan's court after 935, there is no further report of Constantine until 937. In that year, together with Owain and Olaf Guthfrithson of Dublin, Constantine invaded England. The resulting battle of Brunanburh—Dún Brunde—is reported in the Annals of Ulster as follows:
a great battle, lamentable and terrible was cruelly fought... in which fell uncounted thousands of the Northmen. ...And on the other side, a multitude of Saxons fell; but Æthelstan, the king of the Saxons, obtained a great victory.
The battle was remembered in England a generation later as "the Great Battle". When reporting the battle, the Anglo-Saxon Chronicle abandons its usual terse style in favour of a heroic poem vaunting the great victory. In this, the "hoary" Constantine, by now around 60 years of age, is said to have lost a son in the battle, a claim which the Chronicle of the Kings of Alba confirms. The Annals of Clonmacnoise give his name as Cellach. For all its fame, the site of the battle is uncertain and several sites have been advanced, with Bromborough on the Wirral the most favoured location.
Brunanburh, for all that it had been a famous and bloody battle, settled nothing. On 27 October 939 Æthelstan, the "pillar of the dignity of the western world" in the words of the Annals of Ulster, died at Malmesbury. He was succeeded by his brother Edmund, then aged 18. Æthelstan's realm, seemingly made safe by the victory of Brunanburh, collapsed in little more than a year from his death when Amlaíb returned from Ireland and seized Northumbria and the Mercian Danelaw. Edmund spent the remainder of Constantín's reign rebuilding his kingdom.
For Constantine's last years as king, there is only the meagre record of the Chronicle of the Kings of Alba. The death of Æthelstan is reported, as are two others. The first of these, in 938, is that of Dubacan, mormaer of Angus or son of the mormaer. Unlike the report of 918, on this occasion, the title mormaer is attached to a geographical area, but it is unknown whether the Angus of 938 was in any way similar to the later mormaerdom or earldom. The second death, entered with that of Æthelstan, is that of Eochaid mac Ailpín, who might, from his name, have been a kinsman of Constantín.
By the early 940s Constantine was an old man in his late sixties or seventies. The kingdom of Alba was too new to be said to have a customary rule of succession, but Pictish and Irish precedents favoured an adult successor descended from Kenneth MacAlpin. Constantine's surviving son Indulf, probably baptised in 927, would have been too young to be a serious candidate for the kingship in the early 940s, and the obvious heir was Constantine's nephew, Malcolm I. As Malcolm was born no later than 901, by the 940s he was no longer a young man, and may have been impatient. Willingly or not—the 11th-century Prophecy of Berchán, a verse history in the form of a supposed prophecy, states that it was not a voluntary decision—Constantine abdicated in 943 and entered a monastery, leaving the kingdom to Malcolm.
Although his retirement might have been involuntary, the Life of Cathróe of Metz and the Prophecy of Berchán portray Constantine as a devout king. The monastery to which Constantine retired, and where he is said to have been abbot, was probably that of St Andrews. This had been refounded in his reign and given to the reforming Céli Dé (Culdee) movement. The Céli Dé were subsequently to be entrusted with many monasteries throughout the kingdom of Alba until replaced in the 12th century by new orders imported from France.
Seven years later the Chronicle of the Kings of Alba says:
[Malcolm I] plundered the English as far as the river Tees, and he seized a multitude of people and many herds of cattle: and the Scots called this the raid of Albidosorum, that is, Nainndisi. But others say that Constantine made this raid, asking of the king, Malcolm, that the kingship should be given to him for a week's time so that he could visit the English. In fact, it was Malcolm who made the raid, but Constantine incited him, as I have said.
Woolf suggests that the association of Constantine with the raid is a late addition, one derived from a now-lost saga or poem.
Constantine's death in 952 is recorded by the Irish annals, who enter it among ecclesiastics. His son Indulf would become king on Malcolm's death. The last of Constantine's certain descendants to be king in Alba was a great-grandson, Constantine III (Constantín mac Cuiléin). Another son had died at Brunanburh, and, according to John of Worcester, Amlaíb mac Gofraid was married to a daughter of Constantine. It is possible that Constantine had other children, but like the name of his wife, or wives, this has not been recorded.
The form of kingdom which appeared in Constantine's reign continued in much the same way until the Davidian Revolution in the 12th century. As with his ecclesiastical reforms, his political legacy was the creation of a new form of Scottish kingship that lasted for two centuries after his death.
The name of Constantine's wife is not known; however, they are known to have had at least three children:
"Template:Reflist",
"Template:S-aft",
"Template:English, Scottish and British monarchs",
"Template:Main",
"Template:Anchor",
"Template:Refbegin",
"Template:PASE",
"Template:Use British English",
"Template:Infobox royalty",
"Template:S-ttl",
"Template:Featured article",
"Template:Lang",
"Template:See also",
"Template:S-reg",
"Template:Cite AU1",
"Template:S-end",
"Template:Pictish and Scottish Monarchs",
"Template:Lang-ang",
"Template:Cite web",
"Template:Citation",
"Template:S-start",
"Template:Nowrap",
"Template:S-bef",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:S-hou",
"Template:Circa",
"Template:Refend"
] | https://en.wikipedia.org/wiki/Constantine_II_of_Scotland |
7,236 | Constantine the Great | Constantine I (27 February c. 272 – 22 May 337), also known as Constantine the Great, was a Roman emperor from AD 306 to 337 and the first Roman emperor to convert to Christianity. Born in Naissus, Dacia Mediterranea (now Niš, Serbia), he was the son of Flavius Constantius, a Roman army officer of Illyrian origin who had been one of the four rulers of the Tetrarchy. His mother, Helena, was a Greek woman of low birth and a Christian; later canonised as a saint, she is traditionally credited with the conversion of her son. Constantine served with distinction under the Roman emperors Diocletian and Galerius. He began his career by campaigning in the eastern provinces (against the Persians) before being recalled to the west in AD 305 to fight alongside his father in the province of Britannia. After his father's death in 306, Constantine was acclaimed as augustus (emperor) by his army at Eboracum (York, England). He eventually emerged victorious in the civil wars against emperors Maxentius and Licinius to become the sole ruler of the Roman Empire by 324.
Upon his accession, Constantine enacted numerous reforms to strengthen the empire. He restructured the government, separating civil and military authorities. To combat inflation, he introduced the solidus, a new gold coin that became the standard for Byzantine and European currencies for more than a thousand years. The Roman army was reorganised to consist of mobile units (comitatenses), often stationed around the emperor, which served on campaigns against external enemies or Roman rebels, and of frontier-garrison troops (limitanei), which were capable of countering barbarian raids but became, over time, less and less capable of countering full-scale barbarian invasions. Constantine pursued successful campaigns against the tribes on the Roman frontiers—such as the Franks, the Alemanni, the Goths, and the Sarmatians—and resettled territories abandoned by his predecessors during the Crisis of the Third Century with citizens of Roman culture.
Although Constantine lived much of his life as a pagan and later as a catechumen, he began to favour Christianity in 312, eventually becoming a Christian and being baptised either by Eusebius of Nicomedia, an Arian bishop, or, as the Catholic Church and the Coptic Orthodox Church maintain, by Pope Sylvester I. He played an influential role in the proclamation of the Edict of Milan in 313, which declared tolerance for Christianity in the Roman Empire. He convoked the First Council of Nicaea in 325, which produced the statement of Christian belief known as the Nicene Creed. The Church of the Holy Sepulchre was built on his orders at the purported site of Jesus' tomb in Jerusalem and was deemed the holiest place in all of Christendom. The papal claim to temporal power in the High Middle Ages was based on the fabricated Donation of Constantine. He has historically been referred to as the "First Christian Emperor," but while he did favour the Christian Church, some modern scholars debate his beliefs and even his comprehension of Christianity. Nevertheless, he is venerated as a saint in Eastern Christianity, and he did much to push Christianity towards the mainstream of Roman culture.
The age of Constantine marked a distinct epoch in the history of the Roman Empire and a pivotal moment in the transition from classical antiquity to the Middle Ages. He built a new imperial residence in the city of Byzantium and renamed it New Rome, later adopting the name Constantinople after himself; the city is now Istanbul. It subsequently became the capital of the empire for more than a thousand years, the later Eastern Roman Empire often being referred to in English as the Byzantine Empire, a term that the empire itself never used and that was coined by the German historian Hieronymus Wolf. His more immediate political legacy was that he replaced Diocletian's Tetrarchy with the de facto principle of dynastic succession by leaving the empire to his sons and other members of the Constantinian dynasty. His reputation flourished during the lifetime of his children and for centuries after his reign. The medieval church held him up as a paragon of virtue, while secular rulers invoked him as a prototype, a point of reference, and the symbol of imperial legitimacy and identity. At the beginning of the Renaissance, there were more critical appraisals of his reign, prompted by the rediscovery of anti-Constantinian sources. Trends in modern and recent scholarship have attempted to balance the extremes of previous scholarship.
Constantine was a ruler of major importance and has always been a controversial figure. The fluctuations in his reputation reflect the nature of the ancient sources for his reign. These are abundant and detailed, but they have been strongly influenced by the official propaganda of the period and are often one-sided; no contemporaneous histories or biographies dealing with his life and rule have survived. The nearest replacement is Eusebius's Vita Constantini—a mixture of eulogy and hagiography written between 335 and circa 339—that extols Constantine's moral and religious virtues. The Vita creates a contentiously positive image of Constantine, and modern historians have frequently challenged its reliability. The fullest secular life of Constantine is the anonymous Origo Constantini, a work of uncertain date which focuses on military and political events to the neglect of cultural and religious matters.
Lactantius' De mortibus persecutorum, a political Christian pamphlet on the reigns of Diocletian and the Tetrarchy, provides valuable but tendentious detail on Constantine's predecessors and early life. The ecclesiastical histories of Socrates, Sozomen, and Theodoret describe the ecclesiastic disputes of Constantine's later reign. Writing during the reign of Theodosius II (r. 402–450), a century after Constantine's reign, these ecclesiastical historians obscure the events and theologies of the Constantinian period through misdirection, misrepresentation, and deliberate obscurity. The contemporary writings of the orthodox Christian Athanasius and the ecclesiastical history of the Arian Philostorgius also survive, though their biases are no less firm.
The epitomes of Aurelius Victor (De Caesaribus), Eutropius (Breviarium), Festus (Breviarium), and the anonymous author of the Epitome de Caesaribus offer compressed secular political and military histories of the period. Although their authors were not Christian, the epitomes paint a favourable image of Constantine but omit reference to his religious policies. The Panegyrici Latini, a collection of panegyrics from the late 3rd and early 4th centuries, provides valuable information on the politics and ideology of the tetrarchic period and the early life of Constantine. Contemporary architecture—such as the Arch of Constantine in Rome and palaces in Gamzigrad and Córdoba—epigraphic remains, and the coinage of the era complement the literary sources.
Constantine was born in Naissus (today Niš, Serbia), part of the Dardania province of Moesia, on 27 February c. AD 272. His father was Flavius Constantius, an Illyrian who was born in the same region (then called Dacia Ripensis) and a native of the province of Moesia. Constantine's original full name, as well as that of his father, is not known. His praenomen is variously given as Lucius, Marcus, and Gaius. Whatever the case, praenomina had already disappeared from most public records by this time. He also adopted the name "Valerius", the nomen of the emperor Diocletian, following his father's appointment as caesar.
Constantine probably spent little time with his father, who was an officer in the Roman army and part of Emperor Aurelian's imperial bodyguard. Described as a tolerant and politically skilled man, Constantius advanced through the ranks, earning the governorship of Dalmatia from Emperor Diocletian, another of Aurelian's companions from Illyricum, in 284 or 285. Constantine's mother was Helena, a Greek woman of low social standing from Helenopolis of Bithynia. It is uncertain whether she was legally married to Constantius or merely his concubine. Constantine's main language was Latin, and during his public speeches he needed Greek translators.
In July 285, Diocletian declared Maximian, another colleague from Illyricum, his co-emperor. Each emperor would have his own court, his own military and administrative faculties, and each would rule with a separate praetorian prefect as chief lieutenant. Maximian ruled in the West, from his capitals at Mediolanum (Milan, Italy) or Augusta Treverorum (Trier, Germany), while Diocletian ruled in the East, from Nicomedia (İzmit, Turkey). The division was merely pragmatic: the empire was called "indivisible" in official panegyric, and both emperors could move freely throughout the empire. In 288, Maximian appointed Constantius to serve as his praetorian prefect in Gaul. Constantius left Helena to marry Maximian's stepdaughter Theodora in 288 or 289.
Diocletian divided the empire again in 293, appointing two caesars to rule over further subdivisions of East and West. Each would be subordinate to his respective augustus but would act with supreme authority in his assigned lands. This system would later be called the Tetrarchy. Diocletian's first appointee for the office of Caesar was Constantius; his second was Galerius, a native of Felix Romuliana. According to Lactantius, Galerius was a brutal, animalistic man. Although he shared the paganism of Rome's aristocracy, he seemed to them an alien figure, a semi-barbarian. On 1 March, Constantius was promoted to the office of Caesar, and dispatched to Gaul to fight the rebels Carausius and Allectus. In spite of meritocratic overtones, the Tetrarchy retained vestiges of hereditary privilege, and Constantine became the prime candidate for future appointment as Caesar as soon as his father took the position. Constantine went to the court of Diocletian, where he lived as his father's heir presumptive.
Constantine received a formal education at Diocletian's court, where he learned Latin literature, Greek, and philosophy. The cultural environment in Nicomedia was open, fluid, and socially mobile; in it, Constantine could mix with intellectuals both pagan and Christian. He may have attended the lectures of Lactantius, a Christian scholar of Latin in the city. Because Diocletian did not completely trust Constantius—none of the Tetrarchs fully trusted their colleagues—Constantine was held as something of a hostage, a tool to ensure Constantius' best behavior. Constantine was nonetheless a prominent member of the court: he fought for Diocletian and Galerius in Asia and served in a variety of tribunates; he campaigned against barbarians on the Danube in 296 and fought the Persians under Diocletian in Syria in 297, as well as under Galerius in Mesopotamia in 298–299. By late 305, he had become a tribune of the first order, a tribunus ordinis primi.
Constantine had returned to Nicomedia from the eastern front by the spring of 303, in time to witness the beginnings of Diocletian's "Great Persecution", the most severe persecution of Christians in Roman history. In late AD 302, Diocletian and Galerius sent a messenger to the oracle of Apollo at Didyma with an inquiry about Christians. Constantine could later recall being present at the palace when the messenger returned and Diocletian accepted his court's demands for universal persecution. On 23 February 303, Diocletian ordered the destruction of Nicomedia's new church, condemned its scriptures to the flames, and had its treasures seized. In the months that followed, churches and scriptures were destroyed, Christians were deprived of official ranks, and priests were imprisoned. It is unlikely that Constantine played any role in the persecution. In his later writings, he attempted to present himself as an opponent of Diocletian's "sanguinary edicts" against the "Worshippers of God", but nothing indicates that he opposed it effectively at the time. Although no contemporary Christian challenged Constantine for his inaction during the persecutions, it remained a political liability throughout his life.
On 1 May 305, Diocletian, as a result of a debilitating sickness contracted in the winter of 304–305, announced his resignation. In a parallel ceremony in Milan, Maximian did the same. Lactantius states that Galerius manipulated the weakened Diocletian into resigning and forced him to accept Galerius' allies in the imperial succession. According to Lactantius, the crowd listening to Diocletian's resignation speech believed, until the last moment, that Diocletian would choose Constantine and Maxentius (Maximian's son) as his successors. It was not to be: Constantius and Galerius were promoted to augusti, while Severus and Maximinus, Galerius' nephew, were appointed their caesars respectively. Constantine and Maxentius were ignored.
Some of the ancient sources detail plots that Galerius made on Constantine's life in the months following Diocletian's abdication. They assert that Galerius assigned Constantine to lead an advance unit in a cavalry charge through a swamp on the middle Danube, made him enter into single combat with a lion, and attempted to kill him in hunts and wars. Constantine always emerged victorious: the lion came out of the contest in a poorer condition than Constantine, and Constantine returned to Nicomedia from the Danube with a Sarmatian captive to drop at Galerius' feet. It is uncertain how much these tales can be trusted.
Constantine recognised the implicit danger in remaining at Galerius' court, where he was held as a virtual hostage. His career depended on being rescued by his father in the West. Constantius was quick to intervene. In the late spring or early summer of 305, Constantius requested leave for his son to help him campaign in Britain. After a long evening of drinking, Galerius granted the request. Constantine's later propaganda describes how he fled the court in the night, before Galerius could change his mind. He rode from post-house to post-house at high speed, hamstringing every horse in his wake. By the time Galerius awoke the following morning, Constantine had fled too far to be caught. Constantine joined his father in Gaul, at Bononia (Boulogne) before the summer of 305.
From Bononia, they crossed the English Channel to Britain and made their way to Eboracum (York), capital of the province of Britannia Secunda and home to a large military base. Constantine was able to spend a year in northern Britain at his father's side, campaigning against the Picts beyond Hadrian's Wall in the summer and autumn. Constantius' campaign, like that of Septimius Severus before it, probably advanced far into the north without achieving great success. Constantius had become severely sick over the course of his reign and died on 25 July 306 in Eboracum. Before dying, he declared his support for raising Constantine to the rank of full augustus. The Alamannic king Chrocus, a barbarian taken into service under Constantius, then proclaimed Constantine as augustus. The troops loyal to Constantius' memory followed him in acclamation. Gaul and Britain quickly accepted his rule; Hispania, which had been in his father's domain for less than a year, rejected it.
Constantine sent Galerius an official notice of Constantius' death and his own acclamation. Along with the notice, he included a portrait of himself in the robes of an augustus. The portrait was wreathed in bay. He requested recognition as heir to his father's throne and passed off responsibility for his unlawful ascension on his army, claiming they had "forced it upon him". Galerius was put into a fury by the message; he almost set the portrait and messenger on fire. His advisers calmed him and argued that outright denial of Constantine's claims would mean certain war. Galerius was compelled to compromise: he granted Constantine the title "caesar" rather than "augustus" (the latter office went to Severus instead). Wishing to make it clear that he alone gave Constantine legitimacy, Galerius personally sent Constantine the emperor's traditional purple robes. Constantine accepted the decision, knowing that it would remove doubts as to his legitimacy.
Constantine's share of the empire consisted of Britain, Gaul, and Spain, and he commanded one of the largest Roman armies, stationed along the important Rhine frontier. He remained in Britain after his promotion to emperor, driving back the tribes of the Picts and securing his control in the northwestern dioceses. He completed the reconstruction of military bases begun under his father's rule, and he ordered the repair of the region's roadways. He then left for Augusta Treverorum (Trier) in Gaul, the Tetrarchic capital of the northwestern Roman Empire. The Franks learned of Constantine's acclamation and invaded Gaul across the lower Rhine over the winter of 306–307. He drove them back beyond the Rhine and captured the kings Ascaric and Merogais; the kings and their soldiers were fed to the beasts of Trier's amphitheatre in the adventus (arrival) celebrations which followed.
Constantine began a major expansion of Trier. He strengthened the circuit wall around the city with military towers and fortified gates, and he began building a palace complex in the northeastern part of the city. To the south of his palace, he ordered the construction of a large formal audience hall and a massive imperial bathhouse. He sponsored many building projects throughout Gaul during his tenure as emperor of the West, especially in Augustodunum (Autun) and Arelate (Arles). According to Lactantius, Constantine followed a tolerant policy towards Christianity, although he was not yet a Christian. He probably judged it a more sensible policy than open persecution and a way to distinguish himself from the "great persecutor" Galerius. He decreed a formal end to the persecutions and returned to Christians all that they had lost during them.
Constantine was largely untried and had a hint of illegitimacy about him; he relied on his father's reputation in his early propaganda, which gave as much coverage to his father's deeds as to his. His military skill and building projects, however, soon gave the panegyrist the opportunity to comment favourably on the similarities between father and son, and Eusebius remarked that Constantine was a "renewal, as it were, in his own person, of his father's life and reign". Constantinian coinage, sculpture, and oratory also show a tendency for disdain towards the "barbarians" beyond the frontiers. He minted a coin issue after his victory over the Alemanni which depicts weeping and begging Alemannic tribesmen, "the Alemanni conquered" beneath the phrase "Romans' rejoicing". There was little sympathy for these enemies; as his panegyrist declared, "It is a stupid clemency that spares the conquered foe."
Following Galerius' recognition of Constantine as caesar, Constantine's portrait was brought to Rome, as was customary. Maxentius mocked the portrait's subject as the son of a harlot and lamented his own powerlessness. Maxentius, envious of Constantine's authority, seized the title of emperor on 28 October 306. Galerius refused to recognize him but failed to unseat him. Galerius sent Severus against Maxentius, but during the campaign, Severus' armies, previously under the command of Maxentius' father Maximian, defected, and Severus was seized and imprisoned. Maximian, brought out of retirement by his son's rebellion, left for Gaul to confer with Constantine in late 307. He offered to marry his daughter Fausta to Constantine and elevate him to augustan rank. In return, Constantine would reaffirm the old family alliance between Maximian and Constantius and offer support to Maxentius' cause in Italy. Constantine accepted and married Fausta in Trier in late summer 307. Constantine gave Maxentius only meagre support, offering him political recognition.
Constantine remained aloof from the Italian conflict, however. Over the spring and summer of 307, he had left Gaul for Britain to avoid any involvement in the Italian turmoil; now, instead of giving Maxentius military aid, he sent his troops against Germanic tribes along the Rhine. In 308, he raided the territory of the Bructeri and made a bridge across the Rhine at Colonia Agrippinensium (Cologne). In 310, he marched to the northern Rhine and fought the Franks. When not campaigning, he toured his lands advertising his benevolence and supporting the economy and the arts. His refusal to participate in the war increased his popularity among his people and strengthened his power base in the West. Maximian returned to Rome in the winter of 307–308 but soon fell out with his son. In early 308, after a failed attempt to usurp Maxentius' title, Maximian returned to Constantine's court.
On 11 November 308, Galerius called a general council at the military city of Carnuntum (Petronell-Carnuntum, Austria) to resolve the instability in the western provinces. In attendance were Diocletian, briefly returned from retirement, Galerius, and Maximian. Maximian was forced to abdicate again and Constantine was again demoted to caesar. Licinius, one of Galerius' old military companions, was appointed augustus in the western regions. The new system did not last long: Constantine refused to accept the demotion and continued to style himself as augustus on his coinage, even as other members of the Tetrarchy referred to him as a caesar on theirs. Maximinus was frustrated that he had been passed over for promotion while the newcomer Licinius had been raised to the office of augustus and demanded that Galerius promote him. Galerius offered to call both Maximinus and Constantine "sons of the augusti", but neither accepted the new title. By the spring of 310, Galerius was referring to both men as augusti.
In 310, a dispossessed Maximian rebelled against Constantine while Constantine was away campaigning against the Franks. Maximian had been sent south to Arles with a contingent of Constantine's army, in preparation for any attacks by Maxentius in southern Gaul. He announced that Constantine was dead and took up the imperial purple. In spite of a large donative pledge to any who would support him as emperor, most of Constantine's army remained loyal to their emperor, and Maximian was soon compelled to leave. When Constantine heard of the rebellion, he abandoned his campaign against the Franks and marched his army up the Rhine. At Cabillunum (Chalon-sur-Saône), he moved his troops onto waiting boats to row down the slow waters of the Saône to the quicker waters of the Rhone. He disembarked at Lugdunum (Lyon). Maximian fled to Massilia (Marseille), a town better able to withstand a long siege than Arles. It made little difference, however, as loyal citizens opened the rear gates to Constantine. Maximian was captured and reproved for his crimes. Constantine granted some clemency but strongly encouraged his suicide. In July 310, Maximian hanged himself.
In spite of the earlier rupture in their relations, Maxentius was eager to present himself as his father's devoted son after his death. He began minting coins with his father's deified image, proclaiming his desire to avenge Maximian's death. Constantine initially presented the suicide as an unfortunate family tragedy. By 311, however, he was spreading another version. According to this, after Constantine had pardoned him, Maximian planned to murder Constantine in his sleep. Fausta learned of the plot and warned Constantine, who put a eunuch in his own place in bed. Maximian was apprehended when he killed the eunuch and was offered suicide, which he accepted. Along with using propaganda, Constantine instituted a damnatio memoriae on Maximian, destroying all inscriptions referring to him and eliminating any public work bearing his image.
The death of Maximian required a shift in Constantine's public image. He could no longer rely on his connection to the elder Emperor Maximian and needed a new source of legitimacy. In a speech delivered in Gaul on 25 July 310, the anonymous orator reveals a previously unknown dynastic connection to Claudius II, a 3rd-century emperor famed for defeating the Goths and restoring order to the empire. Breaking away from tetrarchic models, the speech emphasizes Constantine's ancestral prerogative to rule, rather than principles of imperial equality. The new ideology expressed in the speech made Galerius and Maximian irrelevant to Constantine's right to rule. Indeed, the orator emphasizes ancestry to the exclusion of all other factors: "No chance agreement of men, nor some unexpected consequence of favour, made you emperor," the orator declares to Constantine.
The oration also moves away from the religious ideology of the Tetrarchy, with its focus on twin dynasties of Jupiter and Hercules. Instead, the orator proclaims that Constantine experienced a divine vision of Apollo and Victory granting him laurel wreaths of health and a long reign. In the likeness of Apollo, Constantine recognised himself as the saving figure to whom would be granted "rule of the whole world", as the poet Virgil had once foretold. The oration's religious shift is paralleled by a similar shift in Constantine's coinage. In his early reign, the coinage of Constantine advertised Mars as his patron. From 310 on, Mars was replaced by Sol Invictus, a god conventionally identified with Apollo. There is little reason to believe that either the dynastic connection or the divine vision is anything other than fiction, but their proclamation strengthened Constantine's claims to legitimacy and increased his popularity among the citizens of Gaul.
By the middle of 310, Galerius had become too ill to involve himself in imperial politics. His final act survives: a letter to provincials posted in Nicomedia on 30 April 311, proclaiming an end to the persecutions, and the resumption of religious toleration.
Eusebius maintains "divine providence […] took action against the perpetrator of these crimes" and gives a graphic account of Galerius' demise:
"Without warning suppurative inflammation broke out round the middle of his genitals, then a deep-seated fistula ulcer; these ate their way incurably into his innermost bowels. From them came a teeming indescribable mass of worms, and a sickening smell was given off, for the whole of his hulking body, thanks to over eating, had been transformed even before his illness into a huge lump of flabby fat, which then decomposed and presented those who came near it with a revolting and horrifying sight."
Galerius died soon after the edict's proclamation, destroying what little remained of the Tetrarchy. Maximinus mobilised against Licinius and seized Asia Minor. A hasty peace was signed on a boat in the middle of the Bosphorus. While Constantine toured Britain and Gaul, Maxentius prepared for war. He fortified northern Italy and strengthened his support in the Christian community by allowing it to elect Eusebius as bishop of Rome.
Maxentius' rule was nevertheless insecure. His early support dissolved in the wake of heightened tax rates and depressed trade; riots broke out in Rome and Carthage; and Domitius Alexander was able to briefly usurp his authority in Africa. By 312, he was a man barely tolerated, not one actively supported, even among Christian Italians. In the summer of 311, Maxentius mobilised against Constantine while Licinius was occupied with affairs in the East. He declared war on Constantine, vowing to avenge his father's "murder". To prevent Maxentius from forming an alliance against him with Licinius, Constantine forged his own alliance with Licinius over the winter of 311–312 and offered him his sister Constantia in marriage. Maximinus considered Constantine's arrangement with Licinius an affront to his authority. In response, he sent ambassadors to Rome, offering political recognition to Maxentius in exchange for military support, which Maxentius accepted. According to Eusebius, inter-regional travel became impossible, and there was military buildup everywhere. There was "not a place where people were not expecting the onset of hostilities every day".
Constantine's advisers and generals cautioned against a preemptive attack on Maxentius; even his soothsayers recommended against it, stating that the sacrifices had produced unfavourable omens. Constantine ignored all these cautions, acting with a spirit that left a deep impression on his followers and inspired some to believe that he had some form of supernatural guidance. Early in the spring of 312, Constantine crossed the Cottian Alps with a quarter of his army, a force numbering about 40,000. The first town his army encountered was Segusium (Susa, Italy), a heavily fortified town that shut its gates to him. Constantine ordered his men to set fire to its gates and scale its walls. He took the town quickly. Constantine ordered his troops not to loot the town and advanced into northern Italy.
Approaching the important city of Augusta Taurinorum (Turin, Italy) from the west, Constantine met a large force of heavily armed Maxentian cavalry. In the ensuing Battle of Turin, Constantine's army encircled Maxentius' cavalry, flanked them with its own horsemen, and dismounted them with blows from iron-tipped clubs. Constantine's armies emerged victorious. Turin refused to give refuge to Maxentius' retreating forces, opening its gates to Constantine instead. Other cities of the north Italian plain sent Constantine embassies of congratulation for his victory. He moved on to Milan, where he was met with open gates and jubilant rejoicing. Constantine rested his army in Milan until mid-summer 312, when he moved on to Brixia (Brescia).
Brescia's army was easily dispersed, and Constantine quickly advanced to Verona, where a large Maxentian force was camped. Ruricius Pompeianus, general of the Veronese forces and Maxentius' praetorian prefect, was in a strong defensive position since the town was surrounded on three sides by the Adige. Constantine sent a small force north of the town in an attempt to cross the river unnoticed. Ruricius sent a large detachment to counter Constantine's expeditionary force but was defeated. Constantine's forces successfully surrounded the town and laid siege. Ruricius gave Constantine the slip and returned with a larger force to oppose him. Constantine refused to let up on the siege and sent only a small force against him. In the desperately fought encounter that followed, Ruricius was killed and his army destroyed. Verona surrendered soon afterwards, followed by Aquileia, Mutina (Modena), and Ravenna. The road to Rome was now wide open to Constantine.
Maxentius prepared for the same type of war he had waged against Severus and Galerius: he sat in Rome and prepared for a siege. He still controlled Rome's Praetorian Guard, was well-stocked with African grain, and was surrounded on all sides by the seemingly impregnable Aurelian Walls. He ordered all bridges across the Tiber cut, reportedly on the counsel of the gods, and left the rest of central Italy undefended; Constantine secured that region's support without challenge. Constantine progressed slowly along the Via Flaminia, allowing the weakness of Maxentius to draw his regime further into turmoil. Maxentius' support continued to weaken: at chariot races on 27 October, the crowd openly taunted Maxentius, shouting that Constantine was invincible. Maxentius, no longer certain that he would emerge from a siege victorious, built a temporary boat bridge across the Tiber in preparation for a field battle against Constantine. On 28 October 312, the sixth anniversary of his reign, he approached the keepers of the Sibylline Books for guidance. The keepers prophesied that, on that very day, "the enemy of the Romans" would die. Maxentius advanced north to meet Constantine in battle.
Maxentius' forces were still twice the size of Constantine's, and he organised them in long lines facing the battle plain with their backs to the river. Constantine's army arrived on the field bearing unfamiliar symbols on their standards and their shields. According to Lactantius, "Constantine was directed in a dream to cause the heavenly sign to be delineated on the shields of his soldiers, and so to proceed to battle. He did as he had been commanded, and he marked on their shields the letter Χ, with a perpendicular line drawn through it and turned round thus at the top, being the cipher of Christ. Having this sign (☧), his troops stood to arms." Eusebius describes a vision that Constantine had while marching at midday in which "he saw with his own eyes the trophy of a cross of light in the heavens, above the sun, and bearing the inscription, In Hoc Signo Vinces" ("In this sign thou shalt conquer"). In Eusebius's account, Constantine had a dream the following night in which Christ appeared with the same heavenly sign and told him to make an army standard in the form of the labarum. Eusebius is vague about when and where these events took place, but the vision enters his narrative before the war begins against Maxentius. He describes the sign as Chi (Χ) traversed by Rho (Ρ) to form ☧, representing the first two letters of the Greek word ΧΡΙΣΤΟΣ (Christos). A medallion was issued at Ticinum in 315 which shows Constantine wearing a helmet emblazoned with the Chi Rho, and coins issued at Siscia in 317/318 repeat the image. The figure was otherwise rare in imperial iconography and propaganda before the 320s. It was not completely unknown, however, being an abbreviation of the Greek word chrēston (good), having previously appeared on the coins of Ptolemy III Euergetes in the 3rd century BC. Following Constantine, centuries of Christians invoked the miraculous or the supernatural when justifying or describing their warfare.
Constantine deployed his own forces along the whole length of Maxentius' line. He ordered his cavalry to charge, and they broke Maxentius' cavalry. He then sent his infantry against Maxentius' infantry, pushing many into the Tiber where they were slaughtered and drowned. The battle was brief, and Maxentius' troops were broken before the first charge. His horse guards and praetorians initially held their position but gave way under the force of a Constantinian cavalry charge; they too broke ranks and fled to the river. Maxentius rode with them and attempted to cross the bridge of boats (Ponte Milvio), but he was pushed into the Tiber and drowned by the mass of his fleeing soldiers.
Constantine entered Rome on 29 October 312 and staged a grand adventus in the city, which was met with jubilation. Maxentius' body was fished out of the Tiber and decapitated, and his head was paraded through the streets for all to see. After the ceremonies, the disembodied head was sent to Carthage, and Carthage offered no further resistance. Unlike his predecessors, Constantine neglected to make the trip to the Capitoline Hill and perform customary sacrifices at the Temple of Jupiter. However, he did visit the Senatorial Curia Julia, and he promised to restore its ancestral privileges and give it a secure role in his reformed government; there would be no revenge against Maxentius' supporters. In response, the Senate decreed him the "title of the first name", which meant that his name would be listed first in all official documents, and they acclaimed him as "the greatest augustus". He issued decrees returning property that was lost under Maxentius, recalling political exiles, and releasing Maxentius' imprisoned opponents.
An extensive propaganda campaign followed, during which Maxentius' image was purged from all public places. He was written up as a "tyrant" and set against an idealised image of Constantine the "liberator". Eusebius is the best representative of this strand of Constantinian propaganda. Maxentius' rescripts were declared invalid, and the honours that he had granted to leaders of the Senate were also invalidated. Constantine also attempted to remove Maxentius' influence on Rome's urban landscape. All structures built by him were rededicated to Constantine, including the Temple of Romulus and the Basilica of Maxentius. At the focal point of the basilica, a stone statue was erected of Constantine holding the Christian labarum in its hand. Its inscription bore the message which the statue illustrated: "By this sign, Constantine had freed Rome from the yoke of the tyrant."
Constantine also sought to upstage Maxentius' achievements. For example, the Circus Maximus was redeveloped so that its seating capacity was 25 times larger than that of Maxentius' racing complex on the Via Appia. Maxentius' strongest military supporters were neutralised when he disbanded the Praetorian Guard and Imperial Horse Guard. The tombstones of the Imperial Horse Guard were ground up and used in a basilica on the Via Labicana, and their former base was redeveloped into the Lateran Basilica on 9 November 312—barely two weeks after Constantine captured the city. The Legio II Parthica was removed from Albano Laziale, and the remainder of Maxentius' armies were sent to do frontier duty on the Rhine.
In the following years, Constantine gradually consolidated his military superiority over his rivals in the crumbling Tetrarchy. In 313, he met Licinius in Milan to secure their alliance by the marriage of Licinius and Constantine's half-sister Constantia. During this meeting, the emperors agreed on the so-called Edict of Milan, officially granting full tolerance to Christianity and all religions in the empire. The document had special benefits for Christians, legalizing their religion and granting them restoration for all property seized during Diocletian's persecution. It repudiates past methods of religious coercion and used only general terms to refer to the divine sphere—"Divinity" and "Supreme Divinity", summa divinitas. The conference was cut short, however, when news reached Licinius that his rival Maximinus had crossed the Bosporus and invaded European territory. Licinius departed and eventually defeated Maximinus, gaining control over the entire eastern half of the Roman Empire. Relations between the two remaining emperors deteriorated, as Constantine suffered an assassination attempt at the hands of a character that Licinius wanted elevated to the rank of Caesar; Licinius, for his part, had Constantine's statues in Emona destroyed. In either 314 or 316, the two augusti fought against one another at the Battle of Cibalae, with Constantine being victorious. They clashed again at the Battle of Mardia in 317 and agreed to a settlement in which Constantine's sons Crispus and Constantine II, and Licinius' son Licinianus were made caesars. After this arrangement, Constantine ruled the dioceses of Pannonia and Macedonia and took residence at Sirmium, whence he could wage war on the Goths and Sarmatians in 322, and on the Goths in 323, defeating and killing their leader Rausimod.
In 320, Licinius allegedly reneged on the religious freedom promised by the Edict of Milan and began to oppress Christians anew, generally without bloodshed, but resorting to confiscations and the sacking of Christian office-holders. Although this characterization of Licinius as anti-Christian is somewhat doubtful, he seems to have been far less open in his support of Christianity than Constantine. He was therefore prone to see the Church as a force more loyal to Constantine than to the imperial system in general; this, at least, is the explanation offered by the Church historian Sozomen.
This dubious arrangement eventually became a challenge to Constantine in the West, climaxing in the great civil war of 324. Constantine's Christian eulogists present the war as a battle between Christianity and paganism; Licinius, aided by Gothic mercenaries, represented the past and ancient paganism, while Constantine and his Franks marched under the standard of the labarum. Outnumbered but fired by their zeal, Constantine's army emerged victorious in the Battle of Adrianople. Licinius fled across the Bosphorus and appointed Martinian, his magister officiorum, as nominal augustus in the West, but Constantine next won the Battle of the Hellespont and finally the Battle of Chrysopolis on 18 September 324. Licinius and Martinian surrendered to Constantine at Nicomedia on the promise their lives would be spared: they were sent to live as private citizens in Thessalonica and Cappadocia respectively, but in 325 Constantine accused Licinius of plotting against him and had them both arrested and hanged; Licinius' son (the son of Constantine's half-sister) was killed in 326. Thus Constantine became the sole emperor of the Roman Empire.
Diocletian had chosen Nicomedia in the East as his capital during the Tetrarchy—not far from Byzantium, well situated to defend Thrace, Asia, and Egypt, all of which had required his military attention. In choosing his new capital, Constantine recognised the shift of the empire from the remote and depopulated West to the richer cities of the East, the strategic importance of protecting the Danube from barbarian incursions and Asia from a hostile Persia, and the value of a site from which he could monitor shipping between the Black Sea and the Mediterranean. Licinius' defeat came to represent the defeat of a rival centre of pagan and Greek-speaking political activity in the East, as opposed to the Christian and Latin-speaking Rome, and it was proposed that a new Eastern capital should represent the integration of the East into the Roman Empire as a whole, as a centre of learning, prosperity, and cultural preservation for the whole of the Eastern Roman Empire. Among the various locations proposed for this alternative capital, Constantine appears to have toyed earlier with Serdica (present-day Sofia), as he was reported saying that "Serdica is my Rome". Sirmium and Thessalonica were also considered. Eventually, however, Constantine decided to work on the Greek city of Byzantium, which offered the advantage of having already been extensively rebuilt on Roman patterns of urbanism during the preceding century by Septimius Severus and Caracalla, who had already acknowledged its strategic importance. The city was thus founded in 324, dedicated on 11 May 330, and renamed Constantinopolis ("Constantine's City" or Constantinople in English). Special commemorative coins were issued in 330 to honor the event. The new city was protected by the relics of the True Cross, the Rod of Moses, and other holy relics, though a cameo now at the Hermitage Museum also represented Constantine crowned by the tyche of the new city. The figures of old gods were either replaced or assimilated into a framework of Christian symbolism. Constantine built the new Church of the Holy Apostles on the site of a temple to Aphrodite. Generations later there circulated the story that a divine vision had led Constantine to this spot, and that an angel no one else could see had led him on a circuit of the new walls. The capital would often be compared to the 'old' Rome as Nova Roma Constantinopolitana, the "New Rome of Constantinople".
Constantine was the first emperor to stop the persecution of Christians and to legalize Christianity, along with all other religions and cults in the Roman Empire. In February 313, he met with Licinius in Milan and developed the Edict of Milan, which stated that Christians should be allowed to follow their faith without oppression. This removed penalties for professing Christianity, under which many had been martyred previously, and it returned confiscated Church property. The edict protected all religions from persecution, not only Christianity, allowing anyone to worship any deity that they chose. A similar edict had been issued in 311 by Galerius, senior emperor of the Tetrarchy, which granted Christians the right to practise their religion but did not restore any property to them. The Edict of Milan included several clauses which stated that all confiscated churches would be returned, as well as other provisions for previously persecuted Christians. Scholars debate whether Constantine adopted his mother Helena's Christianity in his youth or whether he adopted it gradually over the course of his life.
Constantine possibly retained the title of pontifex maximus which emperors bore as heads of the ancient Roman religion until Gratian renounced the title. According to Christian writers, Constantine was over 40 when he finally declared himself a Christian, making it clear that he owed his successes to the protection of the Christian High God alone. Despite these declarations of being a Christian, he waited to be baptised on his deathbed, believing that the baptism would release him of any sins he committed in the course of carrying out his policies while emperor. He supported the Church financially, built basilicas, granted privileges to clergy (such as exemption from certain taxes), promoted Christians to high office, and returned property confiscated during the long period of persecution. His most famous building projects include the Church of the Holy Sepulchre and Old St. Peter's Basilica. In constructing Old St. Peter's Basilica, Constantine went to great lengths to erect it directly over St. Peter's resting place; the difficulty of building on that hill shaped the basilica's design and stretched its construction to more than 30 years from the date Constantine ordered it built.
Constantine might not have patronised Christianity alone. A triumphal arch was built in 315 to celebrate his victory in the Battle of the Milvian Bridge; it was decorated with images of the goddess Victoria, and sacrifices were made at its dedication to pagan gods including Apollo, Diana, and Hercules. Absent from the arch are any depictions of Christian symbolism. However, the arch was commissioned by the Senate, so the absence of Christian symbols may reflect the role of the Curia at the time as a pagan redoubt.
In 321, he legislated that the venerable Sunday should be a day of rest for all citizens. In 323, he issued a decree banning Christians from participating in state sacrifices. After the pagan gods had disappeared from his coinage, Christian symbols appeared as Constantine's attributes, the chi rho between his hands or on his labarum, as well as on the coinage. The reign of Constantine established a precedent for the emperor to have great influence and authority in the early Christian councils, most notably the dispute over Arianism. Constantine disliked the risks to societal stability that religious disputes and controversies brought with them, preferring to establish an orthodoxy. His influence over the Church councils was to enforce doctrine, root out heresy, and uphold ecclesiastical unity; the Church's role was to determine proper worship, doctrines, and dogma.
North African bishops struggled with Christian bishops who had been ordained by Donatus in opposition to Caecilian from 313 to 316. The African bishops could not come to terms, and the Donatists asked Constantine to act as a judge in the dispute. Three regional Church councils and another trial before Constantine all ruled against Donatus and the Donatist movement in North Africa. In 317, Constantine issued an edict to confiscate Donatist church property and to send Donatist clergy into exile. More significantly, in 325 he summoned the First Council of Nicaea, best known for its dealing with Arianism and for instituting the Nicene Creed. He enforced the council's prohibition against celebrating the Lord's Supper on the day before the Jewish Passover, which marked a definite break of Christianity from the Judaic tradition. From then on, the solar Julian calendar was given precedence over the lunisolar Hebrew calendar among the Christian churches of the Roman Empire.
Constantine made some new laws regarding the Jews; some of them were unfavourable towards Jews, although they were not harsher than those of his predecessors. It was made illegal for Jews to seek converts or to attack other Jews who had converted to Christianity. They were forbidden to own Christian slaves or to circumcise their slaves. On the other hand, Jewish clergy were given the same exemptions as Christian clergy.
From the mid-3rd century onwards, the emperors began to favour members of the equestrian order over senators, who had a monopoly on the most important offices of the state. Senators were stripped of the command of legions and most provincial governorships, as it was felt that they lacked the specialised military upbringing required in an age of acute defence needs; such posts were given to equestrians by Diocletian and his colleagues, following a practice enforced piecemeal by their predecessors. The emperors, however, still needed the talents and the help of the very rich, who were relied on to maintain social order and cohesion by means of a web of powerful influence and contacts at all levels. Exclusion of the old senatorial aristocracy threatened this arrangement.
In 326, Constantine reversed this pro-equestrian trend, raising many administrative positions to senatorial rank and thus opening these offices to the old aristocracy; at the same time, he elevated the rank of existing equestrian office-holders to senator, degrading the equestrian order in the process (at least as a bureaucratic rank). The title of perfectissimus was granted only to mid- or low-level officials by the end of the 4th century.
By the new Constantinian arrangement, one could become a senator by being elected praetor or by fulfilling a function of senatorial rank. From then on, holding actual power and social status were melded together into a joint imperial hierarchy. Constantine gained the support of the old nobility with this, as the Senate was allowed to elect praetors and quaestors in place of the usual practice of the emperors directly creating magistrates (adlectio). An inscription in honor of city prefect Ceionius Rufus Albinus states that Constantine had restored to the Senate "the auctoritas it had lost at Caesar's time".
The Senate as a body remained devoid of any significant power; nevertheless, the senators, who had been marginalised as potential holders of imperial functions during the 3rd century, could now compete for such positions alongside more upstart bureaucrats. Some modern historians see in those administrative reforms an attempt by Constantine at reintegrating the senatorial order into the imperial administrative elite to counter the possibility of alienating pagan senators from a Christianised imperial rule; however, such an interpretation remains conjectural, given that we lack precise figures for pre-Constantine conversions to Christianity in the old senatorial milieu. Some historians suggest that early conversions among the old aristocracy were more numerous than previously supposed.
Constantine's reforms had to do only with the civilian administration. The military chiefs had risen from the ranks since the Crisis of the Third Century but remained outside the Senate, to which they were admitted only under Constantine's children.
In the 3rd century, the production of fiat money to pay for public expenses resulted in runaway inflation, and Diocletian tried unsuccessfully to re-establish trustworthy minting of silver coins, as well as silver-bronze "billon" coins (the term "billon" meaning an alloy of precious and base metals that is mostly base metal). Silver currency was overvalued in terms of its actual metal content and therefore could only circulate at much discounted rates. Constantine stopped minting the Diocletianic "pure" silver argenteus soon after 305, while the "billon" currency continued to be used until the 360s. From the early 300s on, Constantine forsook any attempts at restoring the silver currency, preferring instead to concentrate on minting large quantities of the gold solidus, 72 of which made a pound of gold. New and highly debased silver pieces continued to be issued during his later reign and after his death, in a continuous process of retariffing, until this "billon" minting ceased in 367; the role of the silver piece was then taken over by various denominations of bronze coins, the most important being the centenionalis.
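As a rough illustration of the standard implied by "72 of which made a pound of gold": taking the conventional modern estimate of roughly 327 g for the Roman pound (a figure not stated above, and offered here only as an assumption for the sake of the calculation), the weight of a single solidus works out to about

\[
\frac{1\ \text{Roman pound}}{72} \approx \frac{327\ \text{g}}{72} \approx 4.5\ \text{g of gold per solidus.}
\]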
These bronze pieces continued to be devalued, which made it possible to maintain a fiduciary coinage alongside a gold standard. The author of De Rebus Bellicis held that the rift widened between classes because of this monetary policy; the rich benefited from the stability in purchasing power of the gold piece, while the poor had to cope with ever-degrading bronze pieces. Later emperors such as Julian the Apostate insisted on trustworthy mintings of the bronze currency.
Constantine's monetary policies were closely associated with his religious policies; increased minting was associated with the confiscation, between 331 and 336, of all gold, silver, and bronze statues from pagan temples; the statues were declared imperial property. Two imperial commissioners for each province had the task of collecting the statues and melting them for immediate minting, with the exception of a number of bronze statues that were used as public monuments in Constantinople.
Constantine had his eldest son Crispus seized and put to death by "cold poison" at Pola (Pula, Croatia) sometime between 15 May and 17 June 326. In July, he had his wife Empress Fausta (stepmother of Crispus) killed in an overheated bath. Their names were wiped from the face of many inscriptions, references to their lives were eradicated from the literary record, and their memory was condemned. Eusebius, for example, edited out any praise of Crispus from later copies of Historia Ecclesiastica, and his Vita Constantini contains no mention of Fausta or Crispus. Few ancient sources are willing to discuss possible motives for the events, and the few that do are of later provenance and are generally unreliable. At the time of the executions, it was commonly believed that Empress Fausta was either in an illicit relationship with Crispus or was spreading rumors to that effect. A popular myth arose, modified to allude to the Hippolytus–Phaedra legend, with the suggestion that Constantine killed Crispus and Fausta for their immoralities; the largely fictional Passion of Artemius explicitly makes this connection. The myth rests on slim evidence as an interpretation of the executions; only late and unreliable sources allude to the relationship between Crispus and Fausta, and there is no evidence for the modern suggestion that Constantine's "godly" edicts of 326 and the irregularities of Crispus are somehow connected.
Although Constantine created his apparent heirs "caesars", following a pattern established by Diocletian, he gave his creations a hereditary character, alien to the tetrarchic system: Constantine's caesars were to be kept in the hope of ascending to empire and entirely subordinated to their augustus, as long as he was alive. Adrian Goldsworthy speculates that an alternative explanation for the execution of Crispus was Constantine's desire to keep a firm grip on his prospective heirs, this—and Fausta's desire to see her own sons inherit instead of their half-brother—being reason enough for killing Crispus; the subsequent execution of Fausta, however, was probably meant as a reminder to her children that Constantine would not hesitate in "killing his own relatives when he felt this was necessary".
Constantine considered Constantinople his capital and permanent residence. He lived there for a good portion of his later life. In 328, construction was completed on Constantine's Bridge at Sucidava (today Celei in Romania), in hopes of reconquering Dacia, a province that had been abandoned under Aurelian. In the late winter of 332, Constantine campaigned with the Sarmatians against the Goths. The weather and lack of food reportedly cost the Goths dearly before they submitted to Rome. In 334, after Sarmatian commoners had overthrown their leaders, Constantine led a campaign against the tribe. He won a victory in the war and extended his control over the region, as remains of camps and fortifications there indicate. Constantine resettled some Sarmatian exiles as farmers in Illyrian and Roman districts and conscripted the rest into the army. The new frontier in Dacia was along the Brazda lui Novac line supported by new castra. Constantine took the title Dacicus maximus in 336.
In the last years of his life, Constantine made plans for a campaign against Persia. In a letter written to the king of Persia, Shapur, Constantine had asserted his patronage over Persia's Christian subjects and urged Shapur to treat them well. The letter is undatable. In response to border raids, Constantine sent Constantius to guard the eastern frontier in 335. In 336, Prince Narseh invaded Armenia (a Christian kingdom since 301) and installed a Persian client on the throne. Constantine then resolved to campaign against Persia. He treated the war as a Christian crusade, calling for bishops to accompany the army and commissioning a tent in the shape of a church to follow him everywhere. Constantine planned to be baptised in the Jordan River before crossing into Persia. Persian diplomats came to Constantinople over the winter of 336–337, seeking peace, but Constantine turned them away. The campaign was called off, however, when Constantine became sick in the spring of 337.
From his recent illness, Constantine knew death would soon come. Within the Church of the Holy Apostles, Constantine had secretly prepared a final resting-place for himself. Death came sooner than he had expected. Soon after the Feast of Easter 337, Constantine fell seriously ill. He left Constantinople for the hot baths near his mother's city of Helenopolis (Altınova), on the southern shores of the Gulf of Nicomedia (present-day Gulf of İzmit). There, in a church his mother had built in honor of Lucian the Martyr, he prayed, and there he realised that he was dying. Seeking purification, he became a catechumen and attempted a return to Constantinople, making it only as far as a suburb of Nicomedia. He summoned the bishops and told them of his hope to be baptised in the River Jordan, where Christ was said to have been baptised. He requested the baptism right away, promising to live a more Christian life should he live through his illness. The bishops, Eusebius records, "performed the sacred ceremonies according to custom". He chose the Arianizing bishop Eusebius of Nicomedia, bishop of the city where he lay dying, as his baptizer. In postponing his baptism, he followed one custom of the time which postponed baptism until after infancy. It has been thought that Constantine put off baptism as long as he did so as to be absolved from as much of his sin as possible. Constantine died soon after at a suburban villa called Achyron, on the last day of the fifty-day festival of Pentecost directly following Pascha (or Easter), on 22 May 337.
Although Constantine's death follows the conclusion of the Persian campaign in Eusebius's account, most other sources report his death as occurring in its middle. Emperor Julian (a nephew of Constantine), writing in the mid-350s, observes that the Sassanians escaped punishment for their ill-deeds, because Constantine died "in the middle of his preparations for war". Similar accounts are given in the Origo Constantini, an anonymous document composed while Constantine was still living, which has Constantine dying in Nicomedia; the Historiae abbreviatae of Sextus Aurelius Victor, written in 361, which has Constantine dying at an estate near Nicomedia called Achyrona while marching against the Persians; and the Breviarium of Eutropius, a handbook compiled in 369 for the Emperor Valens, which has Constantine dying in a nameless state villa in Nicomedia. From these and other accounts, some have concluded that Eusebius's Vita was edited to defend Constantine's reputation against what Eusebius saw as a less congenial version of the campaign.
Following his death, his body was transferred to Constantinople and buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the De Ceremoniis. His body survived the plundering of the city during the Fourth Crusade in 1204 but was destroyed at some point afterwards. Constantine was succeeded by his three sons born of Fausta, Constantine II, Constantius II and Constans. His sons, along with his nephew Dalmatius, had already received one division of the empire each to administer as caesars; Constantine may have intended his successors to resume a structure akin to Diocletian's Tetrarchy. A number of relatives were killed by followers of Constantius, notably Constantine's nephews Dalmatius (who held the rank of caesar) and Hannibalianus, presumably to eliminate possible contenders to an already complicated succession. He also had two daughters, Constantina and Helena, wife of Emperor Julian.
Constantine reunited the empire under one emperor, and he won major victories over the Franks and Alamanni in 306–308, the Franks again in 313–314, the Goths in 332, and the Sarmatians in 334. By 336, he had reoccupied most of the long-lost province of Dacia which Aurelian had been forced to abandon in 271. At the time of his death, he was planning a great expedition to end raids on the eastern provinces from the Persian Empire.
In the cultural sphere, Constantine revived the clean-shaven face fashion of earlier emperors, originally introduced among the Romans by Scipio Africanus (236–183 BC) and changed into the wearing of the beard by Hadrian (r. 117–138). This new Roman imperial fashion lasted until the reign of Phocas (r. 602–610) in the 7th century.
The Holy Roman Empire reckoned Constantine among the venerable figures of its tradition. In the later Byzantine state, it became a great honor for an emperor to be hailed as a "new Constantine"; ten emperors carried the name, including the last emperor of the Eastern Roman Empire. Charlemagne used monumental Constantinian forms in his court to suggest that he was Constantine's successor and equal. Charlemagne, Henry VIII, Philip II of Spain, Godfrey of Bouillon, the House of Capet, the House of Habsburg, the House of Stuart, the Macedonian dynasty, and the Phokas family all claimed descent from Constantine. Geoffrey of Monmouth embroidered a tale that the legendary king of Britain, King Arthur, was also a descendant of Constantine. Constantine acquired a mythic role as a hero and warrior against heathens. His reception as a saint seems to have spread within the Byzantine empire during wars against the Sasanian Persians and the Muslims in the late 6th and 7th centuries. The motif of the Romanesque equestrian, the mounted figure in the posture of a triumphant Roman emperor, became a visual metaphor in statuary in praise of local benefactors. The name "Constantine" enjoyed renewed popularity in western France in the 11th and 12th centuries.
The Niš Constantine the Great Airport is named in honor of him. A large cross was planned to be built on a hill overlooking Niš, but the project was cancelled. In 2012, a memorial was erected in Niš in his honor. The Commemoration of the Edict of Milan was held in Niš in 2013. The Orthodox Church considers Constantine a saint (Άγιος Κωνσταντίνος, Saint Constantine), having a feast day on 21 May, and calls him isapostolos (ισαπόστολος Κωνσταντίνος)—an equal of the Apostles.
During Constantine's lifetime, Praxagoras of Athens and Libanius, pagan authors, showered Constantine with praise, presenting him as a paragon of virtue. His nephew and son-in-law Julian the Apostate, however, wrote the satire Symposium, or the Saturnalia in 361, after the last of his sons died; it denigrated Constantine, calling him inferior to the great pagan emperors, and given over to luxury and greed. Following Julian, Eunapius began – and Zosimus continued – a historiographic tradition that blamed Constantine for weakening the empire through his indulgence to the Christians.
During the Middle Ages, European and Near-East Byzantine writers presented Constantine as an ideal ruler, the standard against which any king or emperor could be measured. The Renaissance rediscovery of anti-Constantinian sources prompted a re-evaluation of his career. German humanist Johannes Leunclavius discovered Zosimus' writings and published a Latin translation in 1576. In its preface, he argues that Zosimus' picture of Constantine offered a more balanced view than that of Eusebius and the Church historians. Cardinal Caesar Baronius criticised Zosimus, favouring Eusebius' account of the Constantinian era. Baronius' Life of Constantine (1588) presents Constantine as the model of a Christian prince. Edward Gibbon aimed to unite the two extremes of Constantinian scholarship in his work The History of the Decline and Fall of the Roman Empire (1776–89) by contrasting the portraits presented by Eusebius and Zosimus. He presents a noble war hero who transforms into an Oriental despot in his old age, "degenerating into a cruel and dissolute monarch".
Modern interpretations of Constantine's rule begin with Jacob Burckhardt's The Age of Constantine the Great (1853, rev. 1880). Burckhardt's Constantine is a scheming secularist, a politician who manipulates all parties in a quest to secure his own power. Henri Grégoire followed Burckhardt's evaluation of Constantine in the 1930s, suggesting that Constantine developed an interest in Christianity only after witnessing its political usefulness. Grégoire was skeptical of the authenticity of Eusebius' Vita, and postulated a pseudo-Eusebius to assume responsibility for the vision and conversion narratives of that work. Otto Seeck's Geschichte des Untergangs der antiken Welt (1920–23) and André Piganiol's L'empereur Constantin (1932) go against this historiographic tradition. Seeck presents Constantine as a sincere war hero whose ambiguities were the product of his own naïve inconsistency. Piganiol's Constantine is a philosophical monotheist, a child of his era's religious syncretism. Related histories by Arnold Hugh Martin Jones (Constantine and the Conversion of Europe, 1949) and Ramsay MacMullen (Constantine, 1969) give portraits of a less visionary and more impulsive Constantine.
These later accounts were more willing to present Constantine as a genuine convert to Christianity. Norman H. Baynes began a historiographic tradition with Constantine the Great and the Christian Church (1929), which presents Constantine as a committed Christian; it was reinforced by Andreas Alföldi's The Conversion of Constantine and Pagan Rome (1948), and Timothy Barnes's Constantine and Eusebius (1981) is the culmination of this trend. Barnes' Constantine experienced a radical conversion which drove him on a personal crusade to convert his empire. Charles Matson Odahl's Constantine and the Christian Empire (2004) takes much the same tack. In spite of Barnes' work, arguments continue over the strength and depth of Constantine's religious conversion. Certain themes in this school reached new extremes in T.G. Elliott's The Christianity of Constantine the Great (1996), which presented Constantine as a committed Christian from early childhood. Paul Veyne's 2007 work Quand notre monde est devenu chrétien holds a similar view which does not speculate on the origin of Constantine's Christian motivation, but presents him as a religious revolutionary who fervently believed that he was meant "to play a providential role in the millenary economy of the salvation of humanity".
Latin Christians considered it inappropriate that Constantine was baptised only on his deathbed by an unorthodox bishop, and a legend later emerged that Pope Sylvester I had cured the pagan emperor of leprosy. According to this legend, Constantine was baptised and began the construction of a church in the Lateran Basilica. The Donation of Constantine appeared in the 8th century, most likely during the pontificate of Pope Stephen II, in which the freshly converted Constantine gives "the city of Rome and all the provinces, districts, and cities of Italy and the Western regions" to Sylvester and his successors. In the High Middle Ages, this document was used and accepted as the basis for the pope's temporal power, though it was denounced as a forgery by Emperor Otto III and lamented as the root of papal worldliness by Dante Alighieri. Philologist and Catholic priest Lorenzo Valla proved in 1440 that the document was indeed a forgery.
During the medieval period, Britons regarded Constantine as a king of their own people, particularly associating him with Caernarfon in Gwynedd. While some of this is owed to his fame and his proclamation as emperor in Britain, there was also confusion of his family with Magnus Maximus's supposed wife Elen and her son, another Constantine (Welsh: Custennin). In the 12th century, Henry of Huntingdon included a passage in his Historia Anglorum claiming that the Emperor Constantine's mother was a Briton, making her the daughter of King Cole of Colchester. Geoffrey of Monmouth expanded this story in his highly fictionalised Historia Regum Britanniae, an account of the supposed Kings of Britain from their Trojan origins to the Anglo-Saxon invasion. According to Geoffrey, Cole was King of the Britons when Constantius, here a senator, came to Britain. Afraid of the Romans, Cole submits to Roman law so long as he retains his kingship. However, he dies only a month later, and Constantius takes the throne himself, marrying Cole's daughter Helena. They have their son Constantine, who succeeds his father as King of Britain before becoming Roman emperor.
Historically, this series of events is extremely improbable. Constantius had already left Helena by the time he left for Britain. Additionally, no earlier source mentions that Helena was born in Britain, let alone that she was a princess. Henry's source for the story is unknown, though it may have been a lost hagiography of Helena.
"title": "Early life"
},
{
"paragraph_id": 15,
"text": "Constantine recognised the implicit danger in remaining at Galerius' court, where he was held as a virtual hostage. His career depended on being rescued by his father in the West. Constantius was quick to intervene. In the late spring or early summer of 305, Constantius requested leave for his son to help him campaign in Britain. After a long evening of drinking, Galerius granted the request. Constantine's later propaganda describes how he fled the court in the night, before Galerius could change his mind. He rode from post-house to post-house at high speed, hamstringing every horse in his wake. By the time Galerius awoke the following morning, Constantine had fled too far to be caught. Constantine joined his father in Gaul, at Bononia (Boulogne) before the summer of 305.",
"title": "Early life"
},
{
"paragraph_id": 16,
"text": "From Bononia, they crossed the English Channel to Britain and made their way to Eboracum (York), capital of the province of Britannia Secunda and home to a large military base. Constantine was able to spend a year in northern Britain at his father's side, campaigning against the Picts beyond Hadrian's Wall in the summer and autumn. Constantius' campaign, like that of Septimius Severus before it, probably advanced far into the north without achieving great success. Constantius had become severely sick over the course of his reign and died on 25 July 306 in Eboracum. Before dying, he declared his support for raising Constantine to the rank of full augustus. The Alamannic king Chrocus, a barbarian taken into service under Constantius, then proclaimed Constantine as augustus. The troops loyal to Constantius' memory followed him in acclamation. Gaul and Britain quickly accepted his rule; Hispania, which had been in his father's domain for less than a year, rejected it.",
"title": "Early life"
},
{
"paragraph_id": 17,
"text": "Constantine sent Galerius an official notice of Constantius' death and his own acclamation. Along with the notice, he included a portrait of himself in the robes of an augustus. The portrait was wreathed in bay. He requested recognition as heir to his father's throne and passed off responsibility for his unlawful ascension on his army, claiming they had \"forced it upon him\". Galerius was put into a fury by the message; he almost set the portrait and messenger on fire. His advisers calmed him and argued that outright denial of Constantine's claims would mean certain war. Galerius was compelled to compromise: he granted Constantine the title \"caesar\" rather than \"augustus\" (the latter office went to Severus instead). Wishing to make it clear that he alone gave Constantine legitimacy, Galerius personally sent Constantine the emperor's traditional purple robes. Constantine accepted the decision, knowing that it would remove doubts as to his legitimacy.",
"title": "Early life"
},
{
"paragraph_id": 18,
"text": "Constantine's share of the empire consisted of Britain, Gaul, and Spain, and he commanded one of the largest Roman armies which was stationed along the important Rhine frontier. He remained in Britain after his promotion to emperor, driving back the tribes of the Picts and securing his control in the northwestern dioceses. He completed the reconstruction of military bases begun under his father's rule, and he ordered the repair of the region's roadways. He then left for Augusta Treverorum (Trier) in Gaul, the Tetrarchic capital of the northwestern Roman Empire. The Franks learned of Constantine's acclamation and invaded Gaul across the lower Rhine over the winter of 306–307. He drove them back beyond the Rhine and captured kings Ascaric and Merogais; the kings and their soldiers were fed to the beasts of Trier's amphitheatre in the adventus (arrival) celebrations which followed.",
"title": "Reign"
},
{
"paragraph_id": 19,
"text": "Constantine began a major expansion of Trier. He strengthened the circuit wall around the city with military towers and fortified gates, and he began building a palace complex in the northeastern part of the city. To the south of his palace, he ordered the construction of a large formal audience hall and a massive imperial bathhouse. He sponsored many building projects throughout Gaul during his tenure as emperor of the West, especially in Augustodunum (Autun) and Arelate (Arles). According to Lactantius, Constantine followed a tolerant policy towards Christianity, although he was not yet a Christian. He probably judged it a more sensible policy than open persecution and a way to distinguish himself from the \"great persecutor\" Galerius. He decreed a formal end to persecution and returned to Christians all that they had lost during them.",
"title": "Reign"
},
{
"paragraph_id": 20,
"text": "Constantine was largely untried and had a hint of illegitimacy about him; he relied on his father's reputation in his early propaganda, which gave as much coverage to his father's deeds as to his. His military skill and building projects, however, soon gave the panegyrist the opportunity to comment favourably on the similarities between father and son, and Eusebius remarked that Constantine was a \"renewal, as it were, in his own person, of his father's life and reign\". Constantinian coinage, sculpture, and oratory also show a tendency for disdain towards the \"barbarians\" beyond the frontiers. He minted a coin issue after his victory over the Alemanni which depicts weeping and begging Alemannic tribesmen, \"the Alemanni conquered\" beneath the phrase \"Romans' rejoicing\". There was little sympathy for these enemies; as his panegyrist declared, \"It is a stupid clemency that spares the conquered foe.\"",
"title": "Reign"
},
{
"paragraph_id": 21,
"text": "Following Galerius' recognition of Constantine as caesar, Constantine's portrait was brought to Rome, as was customary. Maxentius mocked the portrait's subject as the son of a harlot and lamented his own powerlessness. Maxentius, envious of Constantine's authority, seized the title of emperor on 28 October 306. Galerius refused to recognize him but failed to unseat him. Galerius sent Severus against Maxentius, but during the campaign, Severus' armies, previously under command of Maxentius' father Maximian, defected, and Severus was seized and imprisoned. Maximian, brought out of retirement by his son's rebellion, left for Gaul to confer with Constantine in late 307. He offered to marry his daughter Fausta to Constantine and elevate him to augustan rank. In return, Constantine would reaffirm the old family alliance between Maximian and Constantius and offer support to Maxentius' cause in Italy. Constantine accepted and married Fausta in Trier in late summer 307. Constantine gave Maxentius his meagre support, offering Maxentius political recognition.",
"title": "Reign"
},
{
"paragraph_id": 22,
"text": "Constantine remained aloof from the Italian conflict, however. Over the spring and summer of 307, he had left Gaul for Britain to avoid any involvement in the Italian turmoil; now, instead of giving Maxentius military aid, he sent his troops against Germanic tribes along the Rhine. In 308, he raided the territory of the Bructeri and made a bridge across the Rhine at Colonia Agrippinensium (Cologne). In 310, he marched to the northern Rhine and fought the Franks. When not campaigning, he toured his lands advertising his benevolence and supporting the economy and the arts. His refusal to participate in the war increased his popularity among his people and strengthened his power base in the West. Maximian returned to Rome in the winter of 307–308 but soon fell out with his son. In early 308, after a failed attempt to usurp Maxentius' title, Maximian returned to Constantine's court.",
"title": "Reign"
},
{
"paragraph_id": 23,
"text": "On 11 November 308, Galerius called a general council at the military city of Carnuntum (Petronell-Carnuntum, Austria) to resolve the instability in the western provinces. In attendance were Diocletian, briefly returned from retirement, Galerius, and Maximian. Maximian was forced to abdicate again and Constantine was again demoted to caesar. Licinius, one of Galerius' old military companions, was appointed augustus in the western regions. The new system did not last long: Constantine refused to accept the demotion and continued to style himself as augustus on his coinage, even as other members of the Tetrarchy referred to him as a caesar on theirs. Maximinus was frustrated that he had been passed over for promotion while the newcomer Licinius had been raised to the office of augustus and demanded that Galerius promote him. Galerius offered to call both Maximinus and Constantine \"sons of the augusti\", but neither accepted the new title. By the spring of 310, Galerius was referring to both men as augusti.",
"title": "Reign"
},
{
"paragraph_id": 24,
"text": "In 310, a dispossessed Maximian rebelled against Constantine while Constantine was away campaigning against the Franks. Maximian had been sent south to Arles with a contingent of Constantine's army, in preparation for any attacks by Maxentius in southern Gaul. He announced that Constantine was dead and took up the imperial purple. In spite of a large donative pledge to any who would support him as emperor, most of Constantine's army remained loyal to their emperor, and Maximian was soon compelled to leave. When Constantine heard of the rebellion, he abandoned his campaign against the Franks and marched his army up the Rhine. At Cabillunum (Chalon-sur-Saône), he moved his troops onto waiting boats to row down the slow waters of the Saône to the quicker waters of the Rhone. He disembarked at Lugdunum (Lyon). Maximian fled to Massilia (Marseille), a town better able to withstand a long siege than Arles. It made little difference, however, as loyal citizens opened the rear gates to Constantine. Maximian was captured and reproved for his crimes. Constantine granted some clemency but strongly encouraged his suicide. In July 310, Maximian hanged himself.",
"title": "Reign"
},
{
"paragraph_id": 25,
"text": "In spite of the earlier rupture in their relations, Maxentius was eager to present himself as his father's devoted son after his death. He began minting coins with his father's deified image, proclaiming his desire to avenge Maximian's death. Constantine initially presented the suicide as an unfortunate family tragedy. By 311, however, he was spreading another version. According to this, after Constantine had pardoned him, Maximian planned to murder Constantine in his sleep. Fausta learned of the plot and warned Constantine, who put a eunuch in his own place in bed. Maximian was apprehended when he killed the eunuch and was offered suicide, which he accepted. Along with using propaganda, Constantine instituted a damnatio memoriae on Maximian, destroying all inscriptions referring to him and eliminating any public work bearing his image.",
"title": "Reign"
},
{
"paragraph_id": 26,
"text": "The death of Maximian required a shift in Constantine's public image. He could no longer rely on his connection to the elder Emperor Maximian and needed a new source of legitimacy. In a speech delivered in Gaul on 25 July 310, the anonymous orator reveals a previously unknown dynastic connection to Claudius II, a 3rd-century emperor famed for defeating the Goths and restoring order to the empire. Breaking away from tetrarchic models, the speech emphasizes Constantine's ancestral prerogative to rule, rather than principles of imperial equality. The new ideology expressed in the speech made Galerius and Maximian irrelevant to Constantine's right to rule. Indeed, the orator emphasizes ancestry to the exclusion of all other factors: \"No chance agreement of men, nor some unexpected consequence of favour, made you emperor,\" the orator declares to Constantine.",
"title": "Reign"
},
{
"paragraph_id": 27,
"text": "The oration also moves away from the religious ideology of the Tetrarchy, with its focus on twin dynasties of Jupiter and Hercules. Instead, the orator proclaims that Constantine experienced a divine vision of Apollo and Victory granting him laurel wreaths of health and a long reign. In the likeness of Apollo, Constantine recognised himself as the saving figure to whom would be granted \"rule of the whole world\", as the poet Virgil had once foretold. The oration's religious shift is paralleled by a similar shift in Constantine's coinage. In his early reign, the coinage of Constantine advertised Mars as his patron. From 310 on, Mars was replaced by Sol Invictus, a god conventionally identified with Apollo. There is little reason to believe that either the dynastic connection or the divine vision are anything other than fiction, but their proclamation strengthened Constantine's claims to legitimacy and increased his popularity among the citizens of Gaul.",
"title": "Reign"
},
{
"paragraph_id": 28,
"text": "By the middle of 310, Galerius had become too ill to involve himself in imperial politics. His final act survives: a letter to provincials posted in Nicomedia on 30 April 311, proclaiming an end to the persecutions, and the resumption of religious toleration.",
"title": "Reign"
},
{
"paragraph_id": 29,
"text": "Eusebius maintains \"divine providence […] took action against the perpetrator of these crimes\" and gives a graphic account of Galerius' demise:",
"title": "Reign"
},
{
"paragraph_id": 30,
"text": "\"Without warning suppurative inflammation broke out round the middle of his genitals, then a deep-seated fistula ulcer; these ate their way incurably into his innermost bowels. From them came a teeming indescribable mass of worms, and a sickening smell was given off, for the whole of his hulking body, thanks to over eating, had been transformed even before his illness into a huge lump of flabby fat, which then decomposed and presented those who came near it with a revolting and horrifying sight.\"",
"title": "Reign"
},
{
"paragraph_id": 31,
"text": "Galerius died soon after the edict's proclamation, destroying what little remained of the Tetrarchy. Maximinus mobilised against Licinius and seized Asia Minor. A hasty peace was signed on a boat in the middle of the Bosphorus. While Constantine toured Britain and Gaul, Maxentius prepared for war. He fortified northern Italy and strengthened his support in the Christian community by allowing it to elect Eusebius as bishop of Rome.",
"title": "Reign"
},
{
"paragraph_id": 32,
"text": "Maxentius' rule was nevertheless insecure. His early support dissolved in the wake of heightened tax rates and depressed trade; riots broke out in Rome and Carthage; and Domitius Alexander was able to briefly usurp his authority in Africa. By 312, he was a man barely tolerated, not one actively supported, even among Christian Italians. In the summer of 311, Maxentius mobilised against Constantine while Licinius was occupied with affairs in the East. He declared war on Constantine, vowing to avenge his father's \"murder\". To prevent Maxentius from forming an alliance against him with Licinius, Constantine forged his own alliance with Licinius over the winter of 311–312 and offered him his sister Constantia in marriage. Maximinus considered Constantine's arrangement with Licinius an affront to his authority. In response, he sent ambassadors to Rome, offering political recognition to Maxentius in exchange for a military support, which Maxentius accepted. According to Eusebius, inter-regional travel became impossible, and there was military buildup everywhere. There was \"not a place where people were not expecting the onset of hostilities every day\".",
"title": "Reign"
},
{
"paragraph_id": 33,
"text": "Constantine's advisers and generals cautioned against preemptive attack on Maxentius; even his soothsayers recommended against it, stating that the sacrifices had produced unfavourable omens. Constantine, with a spirit that left a deep impression on his followers, inspiring some to believe that he had some form of supernatural guidance, ignored all these cautions. Early in the spring of 312, Constantine crossed the Cottian Alps with a quarter of his army, a force numbering about 40,000. The first town his army encountered was Segusium (Susa, Italy), a heavily fortified town that shut its gates to him. Constantine ordered his men to set fire to its gates and scale its walls. He took the town quickly. Constantine ordered his troops not to loot the town and advanced into northern Italy.",
"title": "Reign"
},
{
"paragraph_id": 34,
"text": "At the approach to the west of the important city of Augusta Taurinorum (Turin, Italy), Constantine met a large force of heavily armed Maxentian cavalry. In the ensuing Battle of Turin Constantine's army encircled Maxentius' cavalry, flanked them with his own cavalry, and dismounted them with blows from his soldiers' iron-tipped clubs. Constantine's armies emerged victorious. Turin refused to give refuge to Maxentius' retreating forces, opening its gates to Constantine instead. Other cities of the north Italian plain sent Constantine embassies of congratulation for his victory. He moved on to Milan, where he was met with open gates and jubilant rejoicing. Constantine rested his army in Milan until mid-summer 312, when he moved on to Brixia (Brescia).",
"title": "Reign"
},
{
"paragraph_id": 35,
"text": "Brescia's army was easily dispersed, and Constantine quickly advanced to Verona where a large Maxentian force was camped. Ruricius Pompeianus, general of the Veronese forces and Maxentius' praetorian prefect, was in a strong defensive position since the town was surrounded on three sides by the Adige. Constantine sent a small force north of the town in an attempt to cross the river unnoticed. Ruricius sent a large detachment to counter Constantine's expeditionary force but was defeated. Constantine's forces successfully surrounded the town and laid siege. Ruricius gave Constantine the slip and returned with a larger force to oppose Constantine. Constantine refused to let up on the siege and sent only a small force to oppose him. In the desperately fought encounter that followed, Ruricius was killed and his army destroyed. Verona surrendered soon afterwards, followed by Aquileia, Mutina (Modena), and Ravenna. The road to Rome was now wide open to Constantine.",
"title": "Reign"
},
{
"paragraph_id": 36,
"text": "Maxentius prepared for the same type of war he had waged against Severus and Galerius: he sat in Rome and prepared for a siege. He still controlled Rome's Praetorian Guard, was well-stocked with African grain, and was surrounded on all sides by the seemingly impregnable Aurelian Walls. He ordered all bridges across the Tiber cut, reportedly on the counsel of the gods, and left the rest of central Italy undefended; Constantine secured that region's support without challenge. Constantine progressed slowly along the Via Flaminia, allowing the weakness of Maxentius to draw his regime further into turmoil. Maxentius' support continued to weaken: at chariot races on 27 October, the crowd openly taunted Maxentius, shouting that Constantine was invincible. Maxentius, no longer certain that he would emerge from a siege victorious, built a temporary boat bridge across the Tiber in preparation for a field battle against Constantine. On 28 October 312, the sixth anniversary of his reign, he approached the keepers of the Sibylline Books for guidance. The keepers prophesied that, on that very day, \"the enemy of the Romans\" would die. Maxentius advanced north to meet Constantine in battle.",
"title": "Reign"
},
{
"paragraph_id": 37,
"text": "Maxentius' forces were still twice the size of Constantine's, and he organised them in long lines facing the battle plain with their backs to the river. Constantine's army arrived on the field bearing unfamiliar symbols on their standards and their shields. According to Lactantius \"Constantine was directed in a dream to cause the heavenly sign to be delineated on the shields of his soldiers, and so to proceed to battle. He did as he had been commanded, and he marked on their shields the letter Χ, with a perpendicular line drawn through it and turned round thus at the top, being the cipher of Christ. Having this sign (☧), his troops stood to arms.\" Eusebius describes a vision that Constantine had while marching at midday in which \"he saw with his own eyes the trophy of a cross of light in the heavens, above the sun, and bearing the inscription, In Hoc Signo Vinces\" (\"In this sign thou shalt conquer\"). In Eusebius's account, Constantine had a dream the following night in which Christ appeared with the same heavenly sign and told him to make an army standard in the form of the labarum. Eusebius is vague about when and where these events took place, but it enters his narrative before the war begins against Maxentius. He describes the sign as Chi (Χ) traversed by Rho (Ρ) to form ☧, representing the first two letters of the Greek word ΧΡΙΣΤΟΣ (Christos). A medallion was issued at Ticinum in 315 which shows Constantine wearing a helmet emblazoned with the Chi Rho, and coins issued at Siscia in 317/318 repeat the image. The figure was otherwise rare and is uncommon in imperial iconography and propaganda before the 320s. It was not completely unknown, however, being an abbreviation of the Greek word chrēston (good), having previously appeared on the coins of Ptolemy III Euergetes in the 3rd century BC. Following Constantine, centuries of Christians invoked the miraculous or the supernatural when justifying or describing their warfare.",
"title": "Reign"
},
{
"paragraph_id": 38,
"text": "Constantine deployed his own forces along the whole length of Maxentius' line. He ordered his cavalry to charge, and they broke Maxentius' cavalry. He then sent his infantry against Maxentius' infantry, pushing many into the Tiber where they were slaughtered and drowned. The battle was brief, and Maxentius' troops were broken before the first charge. His horse guards and praetorians initially held their position, but they broke under the force of a Constantinian cavalry charge; they also broke ranks and fled to the river. Maxentius rode with them and attempted to cross the bridge of boats (Ponte Milvio), but he was pushed into the Tiber and drowned by the mass of his fleeing soldiers.",
"title": "Reign"
},
{
"paragraph_id": 39,
"text": "Constantine entered Rome on 29 October 312 and staged a grand adventus in the city which was met with jubilation. Maxentius' body was fished out of the Tiber and decapitated, and his head was paraded through the streets for all to see. After the ceremonies, the disembodied head was sent to Carthage, and Carthage offered no further resistance. Unlike his predecessors, Constantine neglected to make the trip to the Capitoline Hill and perform customary sacrifices at the Temple of Jupiter. However, he did visit the Senatorial Curia Julia, and he promised to restore its ancestral privileges and give it a secure role in his reformed government; there would be no revenge against Maxentius' supporters. In response, the Senate decreed him \"title of the first name\", which meant that his name would be listed first in all official documents, and they acclaimed him as \"the greatest augustus\". He issued decrees returning property that was lost under Maxentius, recalling political exiles, and releasing Maxentius' imprisoned opponents.",
"title": "Reign"
},
{
"paragraph_id": 40,
"text": "An extensive propaganda campaign followed, during which Maxentius' image was purged from all public places. He was written up as a \"tyrant\" and set against an idealised image of Constantine the \"liberator\". Eusebius is the best representative of this strand of Constantinian propaganda. Maxentius' rescripts were declared invalid, and the honours that he had granted to leaders of the Senate were also invalidated. Constantine also attempted to remove Maxentius' influence on Rome's urban landscape. All structures built by him were rededicated to Constantine, including the Temple of Romulus and the Basilica of Maxentius. At the focal point of the basilica, a stone statue was erected of Constantine holding the Christian labarum in its hand. Its inscription bore the message which the statue illustrated: \"By this sign, Constantine had freed Rome from the yoke of the tyrant.\"",
"title": "Reign"
},
{
"paragraph_id": 41,
"text": "Constantine also sought to upstage Maxentius' achievements. For example, the Circus Maximus was redeveloped so that its seating capacity was 25 times larger than that of Maxentius' racing complex on the Via Appia. Maxentius' strongest military supporters were neutralised when he disbanded the Praetorian Guard and Imperial Horse Guard. The tombstones of the Imperial Horse Guard were ground up and used in a basilica on the Via Labicana, and their former base was redeveloped into the Lateran Basilica on 9 November 312—barely two weeks after Constantine captured the city. The Legio II Parthica was removed from Albano Laziale, and the remainder of Maxentius' armies were sent to do frontier duty on the Rhine.",
"title": "Reign"
},
{
"paragraph_id": 42,
"text": "In the following years, Constantine gradually consolidated his military superiority over his rivals in the crumbling Tetrarchy. In 313, he met Licinius in Milan to secure their alliance by the marriage of Licinius and Constantine's half-sister Constantia. During this meeting, the emperors agreed on the so-called Edict of Milan, officially granting full tolerance to Christianity and all religions in the empire. The document had special benefits for Christians, legalizing their religion and granting them restoration for all property seized during Diocletian's persecution. It repudiates past methods of religious coercion and used only general terms to refer to the divine sphere—\"Divinity\" and \"Supreme Divinity\", summa divinitas. The conference was cut short, however, when news reached Licinius that his rival Maximinus had crossed the Bosporus and invaded European territory. Licinius departed and eventually defeated Maximinus, gaining control over the entire eastern half of the Roman Empire. Relations between the two remaining emperors deteriorated, as Constantine suffered an assassination attempt at the hands of a character that Licinius wanted elevated to the rank of Caesar; Licinius, for his part, had Constantine's statues in Emona destroyed. In either 314 or 316, the two augusti fought against one another at the Battle of Cibalae, with Constantine being victorious. They clashed again at the Battle of Mardia in 317 and agreed to a settlement in which Constantine's sons Crispus and Constantine II, and Licinius' son Licinianus were made caesars. After this arrangement, Constantine ruled the dioceses of Pannonia and Macedonia and took residence at Sirmium, whence he could wage war on the Goths and Sarmatians in 322, and on the Goths in 323, defeating and killing their leader Rausimod.",
"title": "Reign"
},
{
"paragraph_id": 43,
"text": "In 320, Licinius allegedly reneged on the religious freedom promised by the Edict of Milan and began to oppress Christians anew, generally without bloodshed, but resorting to confiscations and sacking of Christian office-holders. Although this characterization of Licinius as anti-Christian is somewhat doubtful, the fact is that he seems to have been far less open in his support of Christianity than Constantine. Therefore, Licinius was prone to see the Church as a force more loyal to Constantine than to the Imperial system in general, as the explanation offered by the Church historian Sozomen.",
"title": "Reign"
},
{
"paragraph_id": 44,
"text": "This dubious arrangement eventually became a challenge to Constantine in the West, climaxing in the great civil war of 324. Constantine's Christian eulogists present the war as a battle between Christianity and paganism; Licinius, aided by Gothic mercenaries, represented the past and ancient paganism, while Constantine and his Franks marched under the standard of the labarum. Outnumbered but fired by their zeal, Constantine's army emerged victorious in the Battle of Adrianople. Licinius fled across the Bosphorus and appointed Martinian, his magister officiorum, as nominal augustus in the West, but Constantine next won the Battle of the Hellespont and finally the Battle of Chrysopolis on 18 September 324. Licinius and Martinian surrendered to Constantine at Nicomedia on the promise their lives would be spared: they were sent to live as private citizens in Thessalonica and Cappadocia respectively, but in 325 Constantine accused Licinius of plotting against him and had them both arrested and hanged; Licinius' son (the son of Constantine's half-sister) was killed in 326. Thus Constantine became the sole emperor of the Roman Empire.",
"title": "Reign"
},
{
"paragraph_id": 45,
"text": "Diocletian had chosen Nicomedia in the East as his capital during the Tetrarchy—not far from Byzantium, well situated to defend Thrace, Asia, and Egypt, all of which had required his military attention. Constantine had recognised the shift of the empire from the remote and depopulated West to the richer cities of the East, and the military strategic importance of protecting the Danube from barbarian excursions and Asia from a hostile Persia in choosing his new capital as well as being able to monitor shipping traffic between the Black Sea and the Mediterranean. Licinius' defeat came to represent the defeat of a rival centre of pagan and Greek-speaking political activity in the East, as opposed to the Christian and Latin-speaking Rome, and it was proposed that a new Eastern capital should represent the integration of the East into the Roman Empire as a whole, as a centre of learning, prosperity, and cultural preservation for the whole of the Eastern Roman Empire. Among the various locations proposed for this alternative capital, Constantine appears to have toyed earlier with Serdica (present-day Sofia), as he was reported saying that \"Serdica is my Rome\". Sirmium and Thessalonica were also considered. Eventually, however, Constantine decided to work on the Greek city of Byzantium, which offered the advantage of having already been extensively rebuilt on Roman patterns of urbanism during the preceding century by Septimius Severus and Caracalla, who had already acknowledged its strategic importance. The city was thus founded in 324, dedicated on 11 May 330 and renamed Constantinopolis (\"Constantine's City\" or Constantinople in English). Special commemorative coins were issued in 330 to honor the event. The new city was protected by the relics of the True Cross, the Rod of Moses and other holy relics, though a cameo now at the Hermitage Museum also represented Constantine crowned by the tyche of the new city. The figures of old gods were either replaced or assimilated into a framework of Christian symbolism. Constantine built the new Church of the Holy Apostles on the site of a temple to Aphrodite. Generations later there was the story that a divine vision led Constantine to this spot, and an angel no one else could see led him on a circuit of the new walls. The capital would often be compared to the 'old' Rome as Nova Roma Constantinopolitana, the \"New Rome of Constantinople\".",
"title": "Reign"
},
{
"paragraph_id": 46,
"text": "Constantine was the first emperor to stop the persecution of Christians and to legalize Christianity, along with all other religions/cults in the Roman Empire. In February 313, he met with Licinius in Milan and developed the Edict of Milan, which stated that Christians should be allowed to follow their faith without oppression. This removed penalties for professing Christianity, under which many had been martyred previously, and it returned confiscated Church property. The edict protected all religions from persecution, not only Christianity, allowing anyone to worship any deity that they chose. A similar edict had been issued in 311 by Galerius, senior emperor of the Tetrarchy, which granted Christians the right to practise their religion but did not restore any property to them. The Edict of Milan included several clauses which stated that all confiscated churches would be returned, as well as other provisions for previously persecuted Christians. Scholars debate whether Constantine adopted his mother Helena's Christianity in his youth or whether he adopted it gradually over the course of his life.",
"title": "Reign"
},
{
"paragraph_id": 47,
"text": "Constantine possibly retained the title of pontifex maximus which emperors bore as heads of the ancient Roman religion until Gratian renounced the title. According to Christian writers, Constantine was over 40 when he finally declared himself a Christian, making it clear that he owed his successes to the protection of the Christian High God alone. Despite these declarations of being a Christian, he waited to be baptised on his deathbed, believing that the baptism would release him of any sins he committed in the course of carrying out his policies while emperor. He supported the Church financially, built basilicas, granted privileges to clergy (such as exemption from certain taxes), promoted Christians to high office, and returned property confiscated during the long period of persecution. His most famous building projects include the Church of the Holy Sepulchre and Old St. Peter's Basilica. In constructing the Old St. Peter's Basilica, Constantine went to great lengths to erect the basilica on top of St. Peter's resting place, so much so that it even affected the design of the basilica, including the challenge of erecting it on the hill where St. Peter rested, making its complete construction time over 30 years from the date Constantine ordered it to be built.",
"title": "Reign"
},
{
"paragraph_id": 48,
"text": "Constantine might not have patronised Christianity alone. A triumphal arch was built in 315 to celebrate his victory in the Battle of the Milvian Bridge which was decorated with images of the goddess Victoria, and sacrifices were made to pagan gods at its dedication, including Apollo, Diana, and Hercules. Absent from the arch are any depictions of Christian symbolism. However, the arch was commissioned by the Senate, so the absence of Christian symbols may reflect the role of the Curia at the time as a pagan redoubt.",
"title": "Reign"
},
{
"paragraph_id": 49,
"text": "In 321, he legislated that the venerable Sunday should be a day of rest for all citizens. In 323, he issued a decree banning Christians from participating in state sacrifices. After the pagan gods had disappeared from his coinage, Christian symbols appeared as Constantine's attributes, the chi rho between his hands or on his labarum, as well on the coinage. The reign of Constantine established a precedent for the emperor to have great influence and authority in the early Christian councils, most notably the dispute over Arianism. Constantine disliked the risks to societal stability that religious disputes and controversies brought with them, preferring to establish an orthodoxy. His influence over the Church councils was to enforce doctrine, root out heresy, and uphold ecclesiastical unity; the Church's role was to determine proper worship, doctrines, and dogma.",
"title": "Reign"
},
{
"paragraph_id": 50,
"text": "North African bishops struggled with Christian bishops who had been ordained by Donatus in opposition to Caecilian from 313 to 316. The African bishops could not come to terms, and the Donatists asked Constantine to act as a judge in the dispute. Three regional Church councils and another trial before Constantine all ruled against Donatus and the Donatism movement in North Africa. In 317, Constantine issued an edict to confiscate Donatist church property and to send Donatist clergy into exile. More significantly, in 325 he summoned the First Council of Nicaea, most known for its dealing with Arianism and for instituting the Nicene Creed. He enforced the council's prohibition against celebrating the Lord's Supper on the day before the Jewish Passover, which marked a definite break of Christianity from the Judaic tradition. From then on, the solar Julian calendar was given precedence over the lunisolar Hebrew calendar among the Christian churches of the Roman Empire.",
"title": "Reign"
},
{
"paragraph_id": 51,
"text": "Constantine made some new laws regarding the Jews; some of them were unfavourable towards Jews, although they were not harsher than those of his predecessors. It was made illegal for Jews to seek converts or to attack other Jews who had converted to Christianity. They were forbidden to own Christian slaves or to circumcise their slaves. On the other hand, Jewish clergy were given the same exemptions as Christian clergy.",
"title": "Reign"
},
{
"paragraph_id": 52,
"text": "Beginning in the mid-3rd century, the emperors began to favour members of the equestrian order over senators, who had a monopoly on the most important offices of the state. Senators were stripped of the command of legions and most provincial governorships, as it was felt that they lacked the specialised military upbringing needed in an age of acute defense needs; such posts were given to equestrians by Diocletian and his colleagues, following a practice enforced piecemeal by their predecessors. The emperors, however, still needed the talents and the help of the very rich, who were relied on to maintain social order and cohesion by means of a web of powerful influence and contacts at all levels. Exclusion of the old senatorial aristocracy threatened this arrangement.",
"title": "Reign"
},
{
"paragraph_id": 53,
"text": "In 326, Constantine reversed this pro-equestrian trend, raising many administrative positions to senatorial rank and thus opening these offices to the old aristocracy; at the same time, he elevated the rank of existing equestrian office-holders to senator, degrading the equestrian order in the process (at least as a bureaucratic rank). The title of perfectissimus was granted only to mid- or low-level officials by the end of the 4th century.",
"title": "Reign"
},
{
"paragraph_id": 54,
"text": "By the new Constantinian arrangement, one could become a senator by being elected praetor or by fulfilling a function of senatorial rank. From then on, holding actual power and social status were melded together into a joint imperial hierarchy. Constantine gained the support of the old nobility with this, as the Senate was allowed to elect praetors and quaestors in place of the usual practice of the emperors directly creating magistrates (adlectio). An inscription in honor of city prefect Ceionius Rufus Albinus states that Constantine had restored the Senate \"the auctoritas it had lost at Caesar's time\".",
"title": "Reign"
},
{
"paragraph_id": 55,
"text": "The Senate as a body remained devoid of any significant power; nevertheless, the senators had been marginalised as potential holders of imperial functions during the 3rd century but could dispute such positions alongside more upstart bureaucrats. Some modern historians see in those administrative reforms an attempt by Constantine at reintegrating the senatorial order into the imperial administrative elite to counter the possibility of alienating pagan senators from a Christianised imperial rule; however, such an interpretation remains conjectural, given the fact that we do not have the precise numbers about pre-Constantine conversions to Christianity in the old senatorial milieu. Some historians suggest that early conversions among the old aristocracy were more numerous than previously supposed.",
"title": "Reign"
},
{
"paragraph_id": 56,
"text": "Constantine's reforms had to do only with the civilian administration. The military chiefs had risen from the ranks since the Crisis of the Third Century but remained outside the Senate, in which they were included only by Constantine's children.",
"title": "Reign"
},
{
"paragraph_id": 57,
"text": "In the 3rd century, the production of fiat money to pay for public expenses resulted in runaway inflation, and Diocletian tried unsuccessfully to re-establish trustworthy minting of silver coins, as well as silver-bronze \"billon\" coins (the term \"billon\" meaning an alloy of precious and base metals that is mostly base metal). Silver currency was overvalued in terms of its actual metal content and therefore could only circulate at much discounted rates. Constantine stopped minting the Diocletianic \"pure\" silver argenteus soon after 305, while the \"billon\" currency continued to be used until the 360s. From the early 300s on, Constantine forsook any attempts at restoring the silver currency, preferring instead to concentrate on minting large quantities of the gold solidus, 72 of which made a pound of gold. New and highly debased silver pieces continued to be issued during his later reign and after his death, in a continuous process of retariffing, until this \"billon\" minting ceased in 367, and the silver piece was continued by various denominations of bronze coins, the most important being the centenionalis.",
"title": "Reign"
},
{
"paragraph_id": 58,
"text": "These bronze pieces continued to be devalued, assuring the possibility of keeping fiduciary minting alongside a gold standard. The author of De Rebus Bellicis held that the rift widened between classes because of this monetary policy; the rich benefited from the stability in purchasing power of the gold piece, while the poor had to cope with ever-degrading bronze pieces. Later emperors such as Julian the Apostate insisted on trustworthy mintings of the bronze currency.",
"title": "Reign"
},
{
"paragraph_id": 59,
"text": "Constantine's monetary policies were closely associated with his religious policies; increased minting was associated with the confiscation of all gold, silver, and bronze statues from pagan temples between 331 and 336 which were declared to be imperial property. Two imperial commissioners for each province had the task of getting the statues and melting them for immediate minting, with the exception of a number of bronze statues that were used as public monuments in Constantinople.",
"title": "Reign"
},
{
"paragraph_id": 60,
"text": "Constantine had his eldest son Crispus seized and put to death by \"cold poison\" at Pola (Pula, Croatia) sometime between 15 May and 17 June 326. In July, he had his wife Empress Fausta (stepmother of Crispus) killed in an overheated bath. Their names were wiped from the face of many inscriptions, references to their lives were eradicated from the literary record, and their memory was condemned. Eusebius, for example, edited out any praise of Crispus from later copies of Historia Ecclesiastica, and his Vita Constantini contains no mention of Fausta or Crispus. Few ancient sources are willing to discuss possible motives for the events, and the few that do are of later provenance and are generally unreliable. At the time of the executions, it was commonly believed that Empress Fausta was either in an illicit relationship with Crispus or was spreading rumors to that effect. A popular myth arose, modified to allude to the Hippolytus–Phaedra legend, with the suggestion that Constantine killed Crispus and Fausta for their immoralities; the largely fictional Passion of Artemius explicitly makes this connection. The myth rests on slim evidence as an interpretation of the executions; only late and unreliable sources allude to the relationship between Crispus and Fausta, and there is no evidence for the modern suggestion that Constantine's \"godly\" edicts of 326 and the irregularities of Crispus are somehow connected.",
"title": "Reign"
},
{
"paragraph_id": 61,
"text": "Although Constantine created his apparent heirs \"caesars\", following a pattern established by Diocletian, he gave his creations a hereditary character, alien to the tetrarchic system: Constantine's caesars were to be kept in the hope of ascending to empire and entirely subordinated to their augustus, as long as he was alive. Adrian Goldsworthy speculates an alternative explanation for the execution of Crispus was Constantine's desire to keep a firm grip on his prospective heirs, this—and Fausta's desire for having her sons inheriting instead of their half-brother—being reason enough for killing Crispus; the subsequent execution of Fausta, however, was probably meant as a reminder to her children that Constantine would not hesitate in \"killing his own relatives when he felt this was necessary\".",
"title": "Reign"
},
{
"paragraph_id": 62,
"text": "Constantine considered Constantinople his capital and permanent residence. He lived there for a good portion of his later life. In 328, construction was completed on Constantine's Bridge at Sucidava, (today Celei in Romania) in hopes of reconquering Dacia, a province that had been abandoned under Aurelian. In the late winter of 332, Constantine campaigned with the Sarmatians against the Goths. The weather and lack of food reportedly cost the Goths dearly before they submitted to Rome. In 334, after Sarmatian commoners had overthrown their leaders, Constantine led a campaign against the tribe. He won a victory in the war and extended his control over the region, as remains of camps and fortifications in the region indicate. Constantine resettled some Sarmatian exiles as farmers in Illyrian and Roman districts and conscripted the rest into the army. The new frontier in Dacia was along the Brazda lui Novac line supported by new castra. Constantine took the title Dacicus maximus in 336.",
"title": "Reign"
},
{
"paragraph_id": 63,
"text": "In the last years of his life, Constantine made plans for a campaign against Persia. In a letter written to the king of Persia, Shapur, Constantine had asserted his patronage over Persia's Christian subjects and urged Shapur to treat them well. The letter is undatable. In response to border raids, Constantine sent Constantius to guard the eastern frontier in 335. In 336, Prince Narseh invaded Armenia (a Christian kingdom since 301) and installed a Persian client on the throne. Constantine then resolved to campaign against Persia. He treated the war as a Christian crusade, calling for bishops to accompany the army and commissioning a tent in the shape of a church to follow him everywhere. Constantine planned to be baptised in the Jordan River before crossing into Persia. Persian diplomats came to Constantinople over the winter of 336–337, seeking peace, but Constantine turned them away. The campaign was called off, however, when Constantine became sick in the spring of 337.",
"title": "Reign"
},
{
"paragraph_id": 64,
"text": "From his recent illness, Constantine knew death would soon come. Within the Church of the Holy Apostles, Constantine had secretly prepared a final resting-place for himself. It came sooner than he had expected. Soon after the Feast of Easter 337, Constantine fell seriously ill. He left Constantinople for the hot baths near his mother's city of Helenopolis (Altınova), on the southern shores of the Gulf of Nicomedia (present-day Gulf of İzmit). There, in a church his mother built in honor of Lucian the Martyr, he prayed, and there he realised that he was dying. Seeking purification, he became a catechumen and attempted a return to Constantinople, making it only as far as a suburb of Nicomedia. He summoned the bishops and told them of his hope to be baptised in the River Jordan, where Christ was written to have been baptised. He requested the baptism right away, promising to live a more Christian life should he live through his illness. The bishops, Eusebius records, \"performed the sacred ceremonies according to custom\". He chose the Arianizing bishop Eusebius of Nicomedia, bishop of the city where he lay dying, as his baptizer. In postponing his baptism, he followed one custom at the time which postponed baptism until after infancy. It has been thought that Constantine put off baptism as long as he did so as to be absolved from as much of his sin as possible. Constantine died soon after at a suburban villa called Achyron, on the last day of the fifty-day festival of Pentecost directly following Pascha (or Easter), on 22 May 337.",
"title": "Reign"
},
{
"paragraph_id": 65,
"text": "Although Constantine's death follows the conclusion of the Persian campaign in Eusebius's account, most other sources report his death as occurring in its middle. Emperor Julian (a nephew of Constantine), writing in the mid-350s, observes that the Sassanians escaped punishment for their ill-deeds, because Constantine died \"in the middle of his preparations for war\". Similar accounts are given in the Origo Constantini, an anonymous document composed while Constantine was still living, which has Constantine dying in Nicomedia; the Historiae abbreviatae of Sextus Aurelius Victor, written in 361, which has Constantine dying at an estate near Nicomedia called Achyrona while marching against the Persians; and the Breviarium of Eutropius, a handbook compiled in 369 for the Emperor Valens, which has Constantine dying in a nameless state villa in Nicomedia. From these and other accounts, some have concluded that Eusebius's Vita was edited to defend Constantine's reputation against what Eusebius saw as a less congenial version of the campaign.",
"title": "Reign"
},
{
"paragraph_id": 66,
"text": "Following his death, his body was transferred to Constantinople and buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the De Ceremoniis. His body survived the plundering of the city during the Fourth Crusade in 1204 but was destroyed at some point afterwards. Constantine was succeeded by his three sons born of Fausta, Constantine II, Constantius II and Constans. His sons, along with his nephew Dalmatius, had already received one division of the empire each to administer as caesars; Constantine may have intended his successors to resume a structure akin to Diocletian's Tetrarchy. A number of relatives were killed by followers of Constantius, notably Constantine's nephews Dalmatius (who held the rank of caesar) and Hannibalianus, presumably to eliminate possible contenders to an already complicated succession. He also had two daughters, Constantina and Helena, wife of Emperor Julian.",
"title": "Reign"
},
{
"paragraph_id": 67,
"text": "Constantine reunited the empire under one emperor, and he won major victories over the Franks and Alamanni in 306–308, the Franks again in 313–314, the Goths in 332, and the Sarmatians in 334. By 336, he had reoccupied most of the long-lost province of Dacia which Aurelian had been forced to abandon in 271. At the time of his death, he was planning a great expedition to end raids on the eastern provinces from the Persian Empire.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 68,
"text": "In the cultural sphere, Constantine revived the clean-shaven face fashion of earlier emperors, originally introduced among the Romans by Scipio Africanus (236–183 BC) and changed into the wearing of the beard by Hadrian (r. 117–138). This new Roman imperial fashion lasted until the reign of Phocas (r. 602–610) in the 7th century.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 69,
"text": "The Holy Roman Empire reckoned Constantine among the venerable figures of its tradition. In the later Byzantine state, it became a great honor for an emperor to be hailed as a \"new Constantine\"; ten emperors carried the name, including the last emperor of the Eastern Roman Empire. Charlemagne used monumental Constantinian forms in his court to suggest that he was Constantine's successor and equal. Charlemagne, Henry VIII, Philip II of Spain, Godfrey of Bouillon, House of Capet, House of Habsburg, House of Stuart, Macedonian dynasty and Phokas family claimed descent from Constantine. Geoffrey of Monmouth embroidered a tale that the legendary king of Britain, King Arthur, was also a descendant of Constantine. Constantine acquired a mythic role as a hero and warrior against heathens. His reception as a saint seems to have spread within the Byzantine empire during wars against the Sasanian Persians and the Muslims in the late 6th and 7th century. The motif of the Romanesque equestrian, the mounted figure in the posture of a triumphant Roman emperor, became a visual metaphor in statuary in praise of local benefactors. The name \"Constantine\" enjoyed renewed popularity in western France in the 11th and 12th centuries.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 70,
"text": "The Niš Constantine the Great Airport is named in honor of him. A large cross was planned to be built on a hill overlooking Niš, but the project was cancelled. In 2012, a memorial was erected in Niš in his honor. The Commemoration of the Edict of Milan was held in Niš in 2013. The Orthodox Church considers Constantine a saint (Άγιος Κωνσταντίνος, Saint Constantine), having a feast day on 21 May, and calls him isapostolos (ισαπόστολος Κωνσταντίνος)—an equal of the Apostles.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 71,
"text": "During Constantine's lifetime, Praxagoras of Athens and Libanius, pagan authors, showered Constantine with praise, presenting him as a paragon of virtue. His nephew and son-in-law Julian the Apostate, however, wrote the satire Symposium, or the Saturnalia in 361, after the last of his sons died; it denigrated Constantine, calling him inferior to the great pagan emperors, and given over to luxury and greed. Following Julian, Eunapius began – and Zosimus continued – a historiographic tradition that blamed Constantine for weakening the empire through his indulgence to the Christians.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 72,
"text": "During the Middle Ages, European and Near-East Byzantine writers presented Constantine as an ideal ruler, the standard against which any king or emperor could be measured. The Renaissance rediscovery of anti-Constantinian sources prompted a re-evaluation of his career. German humanist Johannes Leunclavius discovered Zosimus' writings and published a Latin translation in 1576. In its preface, he argues that Zosimus' picture of Constantine offered a more balanced view than that of Eusebius and the Church historians. Cardinal Caesar Baronius criticised Zosimus, favouring Eusebius' account of the Constantinian era. Baronius' Life of Constantine (1588) presents Constantine as the model of a Christian prince. Edward Gibbon aimed to unite the two extremes of Constantinian scholarship in his work The History of the Decline and Fall of the Roman Empire (1776–89) by contrasting the portraits presented by Eusebius and Zosimus. He presents a noble war hero who transforms into an Oriental despot in his old age, \"degenerating into a cruel and dissolute monarch\".",
"title": "Assessment and legacy"
},
{
"paragraph_id": 73,
"text": "Modern interpretations of Constantine's rule begin with Jacob Burckhardt's The Age of Constantine the Great (1853, rev. 1880). Burckhardt's Constantine is a scheming secularist, a politician who manipulates all parties in a quest to secure his own power. Henri Grégoire followed Burckhardt's evaluation of Constantine in the 1930s, suggesting that Constantine developed an interest in Christianity only after witnessing its political usefulness. Grégoire was skeptical of the authenticity of Eusebius' Vita, and postulated a pseudo-Eusebius to assume responsibility for the vision and conversion narratives of that work. Otto Seeck's Geschichte des Untergangs der antiken Welt (1920–23) and André Piganiol's L'empereur Constantin (1932) go against this historiographic tradition. Seeck presents Constantine as a sincere war hero whose ambiguities were the product of his own naïve inconsistency. Piganiol's Constantine is a philosophical monotheist, a child of his era's religious syncretism. Related histories by Arnold Hugh Martin Jones (Constantine and the Conversion of Europe, 1949) and Ramsay MacMullen (Constantine, 1969) give portraits of a less visionary and more impulsive Constantine.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 74,
"text": "These later accounts were more willing to present Constantine as a genuine convert to Christianity. Norman H. Baynes began a historiographic tradition with Constantine the Great and the Christian Church (1929) which presents Constantine as a committed Christian, reinforced by Andreas Alföldi's The Conversion of Constantine and Pagan Rome (1948), and Timothy Barnes's Constantine and Eusebius (1981) is the culmination of this trend. Barnes' Constantine experienced a radical conversion which drove him on a personal crusade to convert his empire. Charles Matson Odahl's Constantine and the Christian Empire (2004) takes much the same tack. In spite of Barnes' work, arguments continue over the strength and depth of Constantine's religious conversion. Certain themes in this school reached new extremes in T.G. Elliott's The Christianity of Constantine the Great (1996), which presented Constantine as a committed Christian from early childhood. Paul Veyne's 2007 work Quand notre monde est devenu chrétien holds a similar view which does not speculate on the origin of Constantine's Christian motivation, but presents him as a religious revolutionary who fervently believed that he was meant \"to play a providential role in the millenary economy of the salvation of humanity\".",
"title": "Assessment and legacy"
},
{
"paragraph_id": 75,
"text": "Latin Christians considered it inappropriate that Constantine was baptised only on his death bed by an unorthodox bishop, and a legend emerged by the early 4th century that Pope Sylvester I had cured the pagan emperor from leprosy. According to this legend, Constantine was baptised and began the construction of a church in the Lateran Basilica. The Donation of Constantine appeared in the 8th century, most likely during the pontificate of Pope Stephen II, in which the freshly converted Constantine gives \"the city of Rome and all the provinces, districts, and cities of Italy and the Western regions\" to Sylvester and his successors. In the High Middle Ages, this document was used and accepted as the basis for the pope's temporal power, though it was denounced as a forgery by Emperor Otto III and lamented as the root of papal worldliness by Dante Alighieri. Philologist and Catholic priest Lorenzo Valla proved in 1440 that the document was indeed a forgery.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 76,
"text": "During the medieval period, Britons regarded Constantine as a king of their own people, particularly associating him with Caernarfon in Gwynedd. While some of this is owed to his fame and his proclamation as emperor in Britain, there was also confusion of his family with Magnus Maximus's supposed wife Elen and her son, another Constantine (Welsh: Custennin). In the 12th century Henry of Huntingdon included a passage in his Historia Anglorum that the Emperor Constantine's mother was a Briton, making her the daughter of King Cole of Colchester. Geoffrey of Monmouth expanded this story in his highly fictionalised Historia Regum Britanniae, an account of the supposed Kings of Britain from their Trojan origins to the Anglo-Saxon invasion. According to Geoffrey, Cole was King of the Britons when Constantius, here a senator, came to Britain. Afraid of the Romans, Cole submits to Roman law so long as he retains his kingship. However, he dies only a month later, and Constantius takes the throne himself, marrying Cole's daughter Helena. They have their son Constantine, who succeeds his father as King of Britain before becoming Roman emperor.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 77,
"text": "Historically, this series of events is extremely improbable. Constantius had already left Helena by the time he left for Britain. Additionally, no earlier source mentions that Helena was born in Britain, let alone that she was a princess. Henry's source for the story is unknown, though it may have been a lost hagiography of Helena.",
"title": "Assessment and legacy"
},
{
"paragraph_id": 78,
"text": "",
"title": "External links"
}
] | Constantine I, also known as Constantine the Great, was a Roman emperor from AD 306 to 337. He was also the first emperor to convert to Christianity. Born in Naissus, Dacia Mediterranea, he was the son of Flavius Constantius, a Roman army officer of Illyrian origin who had been one of the four rulers of the Tetrarchy. His mother, Helena, was a Greek woman of low birth and a Christian. Later canonised as a saint, she is traditionally credited with the conversion of her son. Constantine served with distinction under the Roman emperors Diocletian and Galerius. He began his career by campaigning in the eastern provinces before being recalled to the west to fight alongside his father in the province of Britannia. After his father's death in 306, Constantine was acclaimed as augustus (emperor) by his army at Eboracum. He eventually emerged victorious in the civil wars against emperors Maxentius and Licinius to become the sole ruler of the Roman Empire by 324. Upon his ascension, Constantine enacted numerous reforms to strengthen the empire. He restructured the government, separating civil and military authorities. To combat inflation, he introduced the solidus, a new gold coin that became the standard for Byzantine and European currencies for more than a thousand years. The Roman army was reorganised to consist of mobile units (comitatenses), often around the Emperor, to serve on campaigns against external enemies or Roman rebels, and frontier-garrison troops (limitanei) which were capable of countering barbarian raids, but less and less capable, over time, of countering full-scale barbarian invasions. Constantine pursued successful campaigns against the tribes on the Roman frontiers—such as the Franks, the Alemanni, the Goths, and the Sarmatians—and resettled territories abandoned by his predecessors during the Crisis of the Third Century with citizens of Roman culture. Although Constantine lived much of his life as a pagan and later as a catechumen, he began to favour Christianity in 312, finally becoming a Christian and being baptised either by Eusebius of Nicomedia, an Arian bishop, or, as maintained by the Catholic Church and the Coptic Orthodox Church, by Pope Sylvester I. He played an influential role in the proclamation of the Edict of Milan in 313, which declared tolerance for Christianity in the Roman Empire. He convoked the First Council of Nicaea in 325, which produced the statement of Christian belief known as the Nicene Creed. The Church of the Holy Sepulchre was built on his orders at the purported site of Jesus' tomb in Jerusalem and was deemed the holiest place in all of Christendom. The papal claim to temporal power in the High Middle Ages was based on the fabricated Donation of Constantine. He has historically been referred to as the "First Christian Emperor," but while he did favour the Christian Church, some modern scholars debate his beliefs and even his comprehension of Christianity. Nevertheless, he is venerated as a saint in Eastern Christianity, and he did much to push Christianity towards the mainstream of Roman culture. The age of Constantine marked a distinct epoch in the history of the Roman Empire and a pivotal moment in the transition from classical antiquity to the Middle Ages. He built a new imperial residence in the city of Byzantium and renamed it New Rome; the city later took the name Constantinople after him and is now Istanbul.
It subsequently became the capital of the empire for more than a thousand years; the later Eastern Roman Empire is often referred to in English as the Byzantine Empire, a term never used by the Empire itself and invented by the German historian Hieronymus Wolf. His more immediate political legacy was that he replaced Diocletian's Tetrarchy with the de facto principle of dynastic succession by leaving the empire to his sons and other members of the Constantinian dynasty. His reputation flourished during the lifetime of his children and for centuries after his reign. The medieval church held him up as a paragon of virtue, while secular rulers invoked him as a prototype, a point of reference, and the symbol of imperial legitimacy and identity. At the beginning of the Renaissance, there were more critical appraisals of his reign with the rediscovery of anti-Constantinian sources. Trends in modern and recent scholarship have attempted to balance the extremes of previous scholarship. | 2001-10-22T20:35:27Z | 2023-12-31T01:36:41Z | [
"Template:Redirect",
"Template:Tree chart/start",
"Template:Refbegin",
"Template:S-reg",
"Template:Use dmy dates",
"Template:Spnd",
"Template:Sfn",
"Template:S-aft",
"Template:Campaignbox Constantine Wars",
"Template:Main",
"Template:Wikiquote",
"Template:S-off",
"Template:History of the Catholic Church",
"Template:Authority control",
"Template:Integralism",
"Template:Cite Pauly",
"Template:Good article",
"Template:Citation needed",
"Template:Chart top",
"Template:Tree chart",
"Template:Multiple image",
"Template:Further",
"Template:Constantinian dynasty family tree",
"Template:Notelist",
"Template:Roman emperors",
"Template:Lang",
"Template:Chart bottom",
"Template:Portal",
"Template:Cite journal",
"Template:TOC limit",
"Template:Infobox saint",
"Template:Break",
"Template:Reflist",
"Template:-",
"Template:S-hou",
"Template:Doi",
"Template:See also",
"Template:Tree chart/end",
"Template:Webarchive",
"Template:EngvarB",
"Template:Efn",
"Template:Cite web",
"Template:S-bef",
"Template:S-ttl",
"Template:S-end",
"Template:ISBN",
"Template:Commons",
"Template:Convert",
"Template:Citation",
"Template:Refend",
"Template:Circa",
"Template:Clear",
"Template:Nowrap",
"Template:Cite book",
"Template:Library resources box",
"Template:Short description",
"Template:Infobox royalty",
"Template:Cite news",
"Template:S-start"
] | https://en.wikipedia.org/wiki/Constantine_the_Great |
7,237 | Common Language Infrastructure | The Common Language Infrastructure (CLI) is an open specification and technical standard originally developed by Microsoft and standardized by ISO/IEC (ISO/IEC 23271) and Ecma International (ECMA 335) that describes executable code and a runtime environment that allows multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. This implies it is platform agnostic. The .NET Framework, .NET and Mono are implementations of the CLI. The metadata format is also used to specify the API definitions exposed by the Windows Runtime.
Among other things, the CLI specification describes the following five aspects:
In August 2000, Microsoft, Hewlett-Packard, Intel, and others worked to standardize CLI. By December 2001, it was ratified by Ecma International, with ISO/IEC standardization following in April 2003.
Microsoft and its partners hold patents for CLI. Ecma and ISO/IEC require that all patents essential to implementation be made available under "reasonable and non-discriminatory (RAND) terms." It is common for RAND licensing to require some royalty payment, which could be a cause for concern with Mono. As of January 2013, neither Microsoft nor its partners have identified any patents essential to CLI implementations subject to RAND terms.
As of July 2009, Microsoft added C# and CLI to the list of specifications that the Microsoft Community Promise applies to, so anyone can safely implement specified editions of the standards without fearing a patent lawsuit from Microsoft. To implement the CLI standard requires conformance to one of the supported and defined profiles of the standard, the minimum of which is the kernel profile. The kernel profile is actually a very small set of types to support in comparison to the well known core library of default .NET installations. However, the conformance clause of the CLI allows for extending the supported profile by adding new methods and types to classes, as well as deriving from new namespaces. But it does not allow for adding new members to interfaces. This means that the features of the CLI can be used and extended, as long as the conforming profile implementation does not change the behavior of a program intended to run on that profile, while allowing for unspecified behavior from programs written specifically for that implementation.
In 2012, Ecma and ISO/IEC published the new edition of the CLI standard. | [
{
"paragraph_id": 0,
"text": "The Common Language Infrastructure (CLI) is an open specification and technical standard originally developed by Microsoft and standardized by ISO/IEC (ISO/IEC 23271) and Ecma International (ECMA 335) that describes executable code and a runtime environment that allows multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. This implies it is platform agnostic. The .NET Framework, .NET and Mono are implementations of the CLI. The metadata format is also used to specify the API definitions exposed by the Windows Runtime.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Among other things, the CLI specification describes the following five aspects:",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "In August 2000, Microsoft, Hewlett-Packard, Intel, and others worked to standardize CLI. By December 2001, it was ratified by the Ecma, with ISO/IEC standardization following in April 2003.",
"title": "Standardization and licensing"
},
{
"paragraph_id": 3,
"text": "Microsoft and its partners hold patents for CLI. Ecma and ISO/IEC require that all patents essential to implementation be made available under \"reasonable and non-discriminatory (RAND) terms.\" It is common for RAND licensing to require some royalty payment, which could be a cause for concern with Mono. As of January 2013, neither Microsoft nor its partners have identified any patents essential to CLI implementations subject to RAND terms.",
"title": "Standardization and licensing"
},
{
"paragraph_id": 4,
"text": "As of July 2009, Microsoft added C# and CLI to the list of specifications that the Microsoft Community Promise applies to, so anyone can safely implement specified editions of the standards without fearing a patent lawsuit from Microsoft. To implement the CLI standard requires conformance to one of the supported and defined profiles of the standard, the minimum of which is the kernel profile. The kernel profile is actually a very small set of types to support in comparison to the well known core library of default .NET installations. However, the conformance clause of the CLI allows for extending the supported profile by adding new methods and types to classes, as well as deriving from new namespaces. But it does not allow for adding new members to interfaces. This means that the features of the CLI can be used and extended, as long as the conforming profile implementation does not change the behavior of a program intended to run on that profile, while allowing for unspecified behavior from programs written specifically for that implementation.",
"title": "Standardization and licensing"
},
{
"paragraph_id": 5,
"text": "In 2012, Ecma and ISO/IEC published the new edition of the CLI standard.",
"title": "Standardization and licensing"
},
{
"paragraph_id": 6,
"text": "",
"title": "External links"
}
] | The Common Language Infrastructure (CLI) is an open specification and technical standard originally developed by Microsoft and standardized by ISO/IEC and Ecma International that describes executable code and a runtime environment that allows multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. This implies it is platform agnostic. The .NET Framework, .NET and Mono are implementations of the CLI.
The metadata format is also used to specify the API definitions exposed by the Windows Runtime. | 2001-11-26T01:18:46Z | 2023-12-12T06:31:13Z | [
"Template:Use mdy dates",
"Template:Short description",
"Template:Infobox technology standard",
"Template:Cite web",
"Template:Ecma International Standards",
"Template:ISO standards",
"Template:Use American English",
"Template:Reflist",
"Template:Official",
"Template:Common Language Infrastructure"
] | https://en.wikipedia.org/wiki/Common_Language_Infrastructure |
7,239 | Cricket World Cup | The Cricket World Cup (officially known as ICC Men's Cricket World Cup) is the international championship of One Day International (ODI) cricket. The event is organised by the sport's governing body, the International Cricket Council (ICC), every four years, with preliminary qualification rounds leading up to a finals tournament. The tournament is one of the world's most viewed sporting events and considered as the "flagship event of the international cricket calendar" by the ICC. It is widely considered the pinnacle championship of the sport of cricket.
The first World Cup was organised in England in June 1975, with the first ODI cricket match having been played only four years earlier. However, a separate Women's Cricket World Cup had been held two years before the first men's tournament, and a tournament involving multiple international teams had been held as early as 1912, when a triangular tournament of Test matches was played between Australia, England and South Africa. The first three World Cups were held in England. From the 1987 tournament onwards, hosting has been shared between countries under an unofficial rotation system, with fourteen ICC members having hosted at least one match in the tournament.
The current format involves a qualification phase, which takes place over the preceding three years, to determine which teams qualify for the tournament phase. In the tournament phase, 10 teams, including the automatically qualifying host nation, compete for the title at venues within the host nation over about a month. In the 2027 edition, the format will be changed to accommodate an expanded 14-team final competition.
A total of twenty teams have competed in the 13 editions of the tournament, with ten teams competing in the recent 2023 tournament. Australia has won the tournament six times, India and West Indies twice each, while Pakistan, Sri Lanka and England have won it once each. The best performance by a non-full-member team came when Kenya made the semi-finals of the 2003 tournament.
Australia is the current champion after winning the 2023 World Cup in India. The next edition, the 2027 World Cup, will be held jointly in South Africa, Zimbabwe, and Namibia.
The first international cricket match was played between Canada and the United States, on 24 and 25 September 1844. However, the first credited Test match was played in 1877 between Australia and England, and the two teams competed regularly for The Ashes in subsequent years. South Africa was admitted to Test status in 1889. Representative cricket teams were selected to tour each other, resulting in bilateral competition. Cricket was also included as an Olympic sport at the 1900 Paris Games, where Great Britain defeated France to win the gold medal. This was the only appearance of cricket at the Summer Olympics.
The first multilateral competition at international level was the 1912 Triangular Tournament, a Test cricket tournament played in England between all three Test-playing nations at the time: England, Australia and South Africa. The event was not a success: the summer was exceptionally wet, making play difficult on damp uncovered pitches, and crowd attendances were poor, attributed to a "surfeit of cricket". Since then, international Test cricket has generally been organised as bilateral series: a multilateral Test tournament was not organised again until the triangular Asian Test Championship in 1999.
The number of nations playing Test cricket increased gradually over time, with the addition of West Indies in 1928, New Zealand in 1930, India in 1932, and Pakistan in 1952. However, international cricket continued to be played as bilateral Test matches over three, four or five days.
In the early 1960s, English county cricket teams began playing a shortened version of cricket which only lasted for one day. Starting in 1962 with a four-team knockout competition known as the Midlands Knock-Out Cup, and continuing with the inaugural Gillette Cup in 1963, one-day cricket grew in popularity in England. A national Sunday League was formed in 1969. The first One-Day International match was played on the fifth day of a rain-aborted Test match between England and Australia at Melbourne in 1971, to fill the time available and as compensation for the frustrated crowd. It was a forty over game with eight balls per over. The success and popularity of the domestic one-day competitions in England and other parts of the world, as well as the early One-Day Internationals, prompted the ICC to consider organizing a Cricket World Cup.
The inaugural Cricket World Cup was hosted in 1975 by England, the only nation able to put forward the resources to stage an event of such magnitude at the time. The first three tournaments were held in England and officially known as the Prudential Cup after the sponsors Prudential plc. The matches consisted of 60 six-ball overs per team, played during daytime in the traditional form, with the players wearing cricket whites and using red cricket balls.
Eight teams participated in the first tournament: Australia, England, India, New Zealand, Pakistan, and the West Indies (the six Test nations at the time), together with Sri Lanka and a composite team from East Africa. One notable omission was South Africa, who were banned from international cricket due to apartheid. The tournament was won by the West Indies, who defeated Australia by 17 runs in the final at Lord's. Roy Fredericks of the West Indies became the first batsman to be dismissed hit wicket in an ODI, during the 1975 World Cup final.
The 1979 World Cup saw the introduction of the ICC Trophy competition to select non-Test playing teams for the World Cup, with Sri Lanka and Canada qualifying. The West Indies won a second consecutive World Cup tournament, defeating the hosts England by 92 runs in the final. At a meeting which followed the World Cup, the International Cricket Conference agreed to make the competition a quadrennial event.
The 1983 event was hosted by England for a third consecutive time. By this stage, Sri Lanka had become a Test-playing nation, and Zimbabwe qualified through the ICC Trophy. A fielding circle was introduced, 30 yards (27 m) away from the stumps. Four fieldsmen needed to be inside it at all times. The teams faced each other twice, before moving into the knock-outs. India was crowned champions after upsetting the West Indies by 43 runs in the final.
India and Pakistan jointly hosted the 1987 tournament, the first time that the competition was held outside England. The games were reduced from 60 to 50 overs per innings, the current standard, because of the shorter daylight hours in the Indian subcontinent compared with England's summer. Australia won the championship by defeating England by 7 runs in the final, the closest margin in a World Cup final until the 2019 edition between England and New Zealand. The 1992 World Cup, held in Australia and New Zealand, introduced many changes to the game, such as coloured clothing, white balls, day/night matches, and a change to the fielding restriction rules. The South African cricket team participated in the event for the first time, following the fall of the apartheid regime and the end of the international sports boycott. Pakistan overcame a dismal start in the tournament to eventually defeat England by 22 runs in the final and emerge as winners.
The 1996 championship was held in the Indian subcontinent for a second time, with the inclusion of Sri Lanka as host for some of its group stage matches. In the semi-final, Sri Lanka, heading towards a crushing victory over India at Eden Gardens after the hosts lost eight wickets while scoring 120 runs in pursuit of 252, were awarded victory by default after crowd unrest broke out in protest against the Indian performance. Sri Lanka went on to win their maiden championship by defeating Australia by seven wickets in the final at Lahore.
In 1999, the event was hosted by England, with some matches also being held in Scotland, Ireland, Wales and the Netherlands. Twelve teams contested the World Cup. Australia qualified for the semi-finals after reaching their target in their Super 6 match against South Africa off the final over of the match. They then proceeded to the final after a tied semi-final, also against South Africa, in which a mix-up between South African batsmen Lance Klusener and Allan Donald saw Donald drop his bat and end up stranded mid-pitch, where he was run out. In the final, Australia dismissed Pakistan for 132 and then reached the target in less than 20 overs and with eight wickets in hand.
South Africa, Zimbabwe and Kenya hosted the 2003 World Cup. The number of teams participating in the event increased from twelve to fourteen. Kenya's victories over Sri Lanka and Zimbabwe, among others – and a forfeit by the New Zealand team, which refused to play in Kenya because of security concerns – enabled Kenya to reach the semi-finals, the best result by an associate. In the final, Australia made 359 runs for the loss of two wickets, the largest ever total in a final, defeating India by 125 runs.
In 2007, the tournament was hosted by the West Indies and expanded to sixteen teams. Following Pakistan's upset loss to World Cup debutants Ireland in the group stage, Pakistani coach Bob Woolmer was found dead in his hotel room. Jamaican police had initially launched a murder investigation into Woolmer's death but later confirmed that he died of heart failure. Australia defeated Sri Lanka in the final by 53 runs (D/L) in farcical light conditions, extending their undefeated run in the World Cup to 29 matches and winning three straight championships.
India, Sri Lanka and Bangladesh together hosted the 2011 World Cup. Pakistan was stripped of its hosting rights following the terrorist attack on the Sri Lankan cricket team in 2009, with the games originally scheduled for Pakistan redistributed to the other host countries. The number of teams participating in the World Cup was reduced to fourteen. Australia lost their final group stage match against Pakistan on 19 March 2011, ending an unbeaten streak of 35 World Cup matches, which had begun on 23 May 1999. India won their second World Cup title by beating Sri Lanka by 6 wickets in the final at Wankhede Stadium in Mumbai, making India the first country to win the World Cup at home. This was also the first time that two Asian countries faced each other in a World Cup Final.
Australia and New Zealand jointly hosted the 2015 World Cup. The number of participants remained at fourteen. Ireland was the most successful Associate nation with a total of three wins in the tournament. New Zealand beat South Africa in a thrilling first semi-final to qualify for their maiden World Cup final. Australia defeated New Zealand by seven wickets in the final at Melbourne to lift the World Cup for the fifth time.
The 2019 World Cup was hosted by England and Wales. The number of participants was reduced to 10. New Zealand defeated India in the first semi-final, which was pushed over to the reserve day due to rain. England defeated the defending champions, Australia, in the second semi-final. Neither finalist had previously won the World Cup. In the final, the scores were tied at 241 after 50 overs and the match went to a super over, after which the scores were again tied at 15. The World Cup was won by England, whose boundary count was greater than New Zealand's.
From the first World Cup in 1975 up to the 2019 World Cup, the majority of teams taking part qualified automatically. Until the 2015 World Cup this was mostly through having Full Membership of the ICC, and for the 2019 World Cup this was mostly through ranking position in the ICC ODI Championship.
From the second World Cup in 1979 up to the 2019 World Cup, the teams that qualified automatically were joined by a small number of others who qualified for the World Cup through the qualification process. The first qualifying tournament was the ICC Trophy; later, the process expanded with pre-qualifying tournaments. For the 2011 World Cup, the ICC World Cricket League replaced the past pre-qualifying processes, and the name "ICC Trophy" was changed to "ICC Men's Cricket World Cup Qualifier". The World Cricket League was the qualification system provided to allow the Associate and Affiliate members of the ICC more opportunities to qualify. The number of teams qualifying varied throughout the years.
From the 2023 World Cup onwards, only the host nation(s) will qualify automatically. All countries will participate in a series of leagues to determine qualification, with automatic promotion and relegation between divisions from one World Cup cycle to the next.
The format of the Cricket World Cup has changed greatly over the course of its history. Each of the first four tournaments was played by eight teams, divided into two groups of four. The competition consisted of two stages, a group stage and a knock-out stage. The four teams in each group played each other in the round-robin group stage, with the top two teams in each group progressing to the semi-finals. The winners of the semi-finals played against each other in the final. With South Africa returning in the fifth tournament in 1992 as a result of the end of the apartheid boycott, nine teams played each other once in the group phase, and the top four teams progressed to the semi-finals. The tournament was further expanded in 1996, with two groups of six teams. The top four teams from each group progressed to quarter-finals and semi-finals.
A distinct format was used for the 1999 and 2003 World Cups. The teams were split into two pools, with the top three teams in each pool advancing to the Super 6. The Super 6 teams played the three other teams that advanced from the other group. As they advanced, the teams carried their points forward from previous matches against other teams advancing alongside them, giving them an incentive to perform well in the group stages. The top four teams from the Super 6 stage progressed to the semi-finals, with the winners playing in the final.
The format used in the 2007 World Cup involved 16 teams allocated into four groups of four. Within each group, the teams played each other in a round-robin format. Teams earned points for wins and half-points for ties. The top two teams from each group moved forward to the Super 8 round. The Super 8 teams played the other six teams that progressed from the different groups. Teams earned points in the same way as the group stage, but carried their points forward from previous matches against the other teams who qualified from the same group to the Super 8 stage. The top four teams from the Super 8 round advanced to the semi-finals, and the winners of the semi-finals played in the final.
The format used in the 2011 and 2015 World Cups featured two groups of seven teams, each playing in a round-robin format. The top four teams from each group proceeded to the knock out stage consisting of quarter-finals, semi-finals and ultimately the final.
In the 2019 and 2023 editions of the tournament, the number of teams participating dropped to 10. Each team plays every other team once in a round-robin format before the semi-finals, a format similar to that of the 1992 World Cup. The 2027 and 2031 World Cups will have 14 teams, with the same format as the 2003 edition.
The ICC Cricket World Cup Trophy is presented to the winners of the World Cup. The current trophy was created for the 1999 championships, and was the first permanent prize in the tournament's history. Prior to this, different trophies were made for each World Cup. The trophy was designed and produced in London by a team of craftsmen from Garrard & Co over a period of two months.
The current trophy is made from silver and gilt, and features a golden globe held up by three silver columns. The columns, shaped as stumps and bails, represent the three fundamental aspects of cricket: batting, bowling and fielding, while the globe characterises a cricket ball. The seam is tilted to symbolize the axial tilt of the Earth. It stands 60 centimetres (24 in) high and weighs approximately 11 kilograms (24 lb). The names of the previous winners are engraved on the base of the trophy, with space for a total of twenty inscriptions. The ICC keeps the original trophy. A replica differing only in the inscriptions is permanently awarded to the winning team.
The tournament is one of the world's most-viewed sporting events, and successive tournaments have generated increasing media attention as One-Day International cricket has become more established. The 2011 Cricket World Cup was televised in over 200 countries to over 2.2 billion viewers. Television rights, mainly for the 2011 and 2015 World Cup, were sold for over US$1.1 billion, and sponsorship rights were sold for a further US$500 million. The ICC claimed a total of 1.6 billion viewers for the 2019 World Cup as well as 4.6 billion views of digital video of the tournament. The most-watched match of the tournament was the group game between India and Pakistan, which was watched by more than 300 million people live.
The International Cricket Council's executive committee votes for the hosts of the tournament after examining the bids made by the nations keen to hold a Cricket World Cup.
England hosted the first three competitions. The ICC decided that England should host the first tournament because it was ready to devote the resources required to organising the inaugural event. India volunteered to host the third Cricket World Cup, but most ICC members preferred England as the longer period of daylight in England in June meant that a match could be completed in one day. The 1987 Cricket World Cup was held in India and Pakistan, the first hosted outside England.
Many of the tournaments have been jointly hosted by nations from the same geographical region, such as South Asia in 1987, 1996 and 2011, Australasia (in Australia and New Zealand) in 1992 and 2015, Southern Africa in 2003 and West Indies in 2007.
In November 2021, the ICC published the names of the hosts for ICC events to be played in the 2024–2031 cycle. The hosts for the 50-over World Cup, along with those for the T20 World Cup and the Champions Trophy, were selected through a competitive bidding process.
Twenty nations have qualified for the Cricket World Cup at least once. Six teams have competed in every tournament, five of which have won the title. The West Indies won the first two tournaments, Australia has won six, India has won two, while Pakistan, Sri Lanka and England have each won once. The West Indies (1975 and 1979) and Australia (1999, 2003 and 2007) are the only teams to have won consecutive titles. Australia has played in eight of the thirteen finals (1975, 1987, 1996, 1999, 2003, 2007, 2015 and 2023). New Zealand has yet to win the World Cup, but has been runners-up two times (2015 and 2019). The best result by a non-Test playing nation is the semi-final appearance by Kenya in the 2003 tournament; while the best result by a non-Test playing team on their debut is the Super 8 (second round) by Ireland in 2007.
Sri Lanka, as a co-host of the 1996 World Cup, was the first host to win the tournament, though the final was held in Pakistan. India won in 2011 as host and was the first team to win a final played in their own country. Australia and England repeated the feat in 2015 and 2019 respectively. Other than this, England made it to the final as a host in 1979. Other countries which have achieved or equalled their best World Cup results while co-hosting the tournament are New Zealand as finalists in 2015, Zimbabwe who reached the Super Six in 2003, and Kenya as semi-finalists in 2003. In 1987, co-hosts India and Pakistan both reached the semi-finals, but were eliminated by England and Australia respectively. Australia in 1992, England in 1999, South Africa in 2003, and Bangladesh in 2011 have been host teams that were eliminated in the first round.
An overview of the teams' performances in every World Cup is given below. For each tournament, the number of teams in each finals tournament (in brackets) are shown.
Legend
The table below provides an overview of the performances of teams over past World Cups, as of the end of the 2019 tournament. Teams are sorted by best performance, then by appearances, total number of wins, total number of games, and alphabetical order respectively.
Note: | [
{
"paragraph_id": 0,
"text": "The Cricket World Cup (officially known as ICC Men's Cricket World Cup) is the international championship of One Day International (ODI) cricket. The event is organised by the sport's governing body, the International Cricket Council (ICC), every four years, with preliminary qualification rounds leading up to a finals tournament. The tournament is one of the world's most viewed sporting events and considered as the \"flagship event of the international cricket calendar\" by the ICC. It is widely considered the pinnacle championship of the sport of cricket.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The first World Cup was organised in England in June 1975, with the first ODI cricket match having been played only four years earlier. However, a separate Women's Cricket World Cup had been held two years before the first men's tournament, and a tournament involving multiple international teams had been held as early as 1912, when a triangular tournament of Test matches was played between Australia, England and South Africa. The first three World Cups were held in England. From the 1987 tournament onwards, hosting has been shared between countries under an unofficial rotation system, with fourteen ICC members having hosted at least one match in the tournament.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The current format involves a qualification phase, which takes place over the preceding three years, to determine which teams qualify for the tournament phase. In the tournament phase, 10 teams, including the automatically qualifying host nation, compete for the title at venues within the host nation over about a month. In the 2027 edition, the format will be changed to accommodate an expanded 14-team final competition.",
"title": ""
},
{
"paragraph_id": 3,
"text": "A total of twenty teams have competed in the 13 editions of the tournament, with ten teams competing in the recent 2023 tournament. Australia has won the tournament six times, India and West Indies twice each, while Pakistan, Sri Lanka and England have won it once each. The best performance by a non-full-member team came when Kenya made the semi-finals of the 2003 tournament.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Australia is the current champion after winning the 2023 World Cup in India. The subsequent 2027 World Cup will be held jointly in South Africa, Zimbabwe, and Namibia.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The first international cricket match was played between Canada and the United States, on 24 and 25 September 1844. However, the first credited Test match was played in 1877 between Australia and England, and the two teams competed regularly for The Ashes in subsequent years. South Africa was admitted to Test status in 1889. Representative cricket teams were selected to tour each other, resulting in bilateral competition. Cricket was also included as an Olympic sport at the 1900 Paris Games, where Great Britain defeated France to win the gold medal. This was the only appearance of cricket at the Summer Olympics.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The first multilateral competition at international level was the 1912 Triangular Tournament, a Test cricket tournament played in England between all three Test-playing nations at the time: England, Australia and South Africa. The event was not a success: the summer was exceptionally wet, making play difficult on damp uncovered pitches, and crowd attendances were poor, attributed to a \"surfeit of cricket\". Since then, international Test cricket has generally been organised as bilateral series: a multilateral Test tournament was not organised again until the triangular Asian Test Championship in 1999.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The number of nations playing Test cricket increased gradually over time, with the addition of West Indies in 1928, New Zealand in 1930, India in 1932, and Pakistan in 1952. However, international cricket continued to be played as bilateral Test matches over three, four or five days.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the early 1960s, English county cricket teams began playing a shortened version of cricket which only lasted for one day. Starting in 1962 with a four-team knockout competition known as the Midlands Knock-Out Cup, and continuing with the inaugural Gillette Cup in 1963, one-day cricket grew in popularity in England. A national Sunday League was formed in 1969. The first One-Day International match was played on the fifth day of a rain-aborted Test match between England and Australia at Melbourne in 1971, to fill the time available and as compensation for the frustrated crowd. It was a forty over game with eight balls per over. The success and popularity of the domestic one-day competitions in England and other parts of the world, as well as the early One-Day Internationals, prompted the ICC to consider organizing a Cricket World Cup.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The inaugural Cricket World Cup was hosted in 1975 by England, the only nation able to put forward the resources to stage an event of such magnitude at the time. The first three tournaments were held in England and officially known as the Prudential Cup after the sponsors Prudential plc. The matches consisted of 60 six-ball overs per team, played during daytime in the traditional form, with the players wearing cricket whites and using red cricket balls.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Eight teams participated in the first tournament: Australia, England, India, New Zealand, Pakistan, and the West Indies (the six Test nations at the time), together with Sri Lanka and a composite team from East Africa. One notable omission was South Africa, who were banned from international cricket due to apartheid. The tournament was won by the West Indies, who defeated Australia by 17 runs in the final at Lord's. Roy Fredricks of West Indies was the first batsmen who got hit-wicket in ODI during the 1975 World Cup final.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The 1979 World Cup saw the introduction of the ICC Trophy competition to select non-Test playing teams for the World Cup, with Sri Lanka and Canada qualifying. The West Indies won a second consecutive World Cup tournament, defeating the hosts England by 92 runs in the final. At a meeting which followed the World Cup, the International Cricket Conference agreed to make the competition a quadrennial event.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The 1983 event was hosted by England for a third consecutive time. By this stage, Sri Lanka had become a Test-playing nation, and Zimbabwe qualified through the ICC Trophy. A fielding circle was introduced, 30 yards (27 m) away from the stumps. Four fieldsmen needed to be inside it at all times. The teams faced each other twice, before moving into the knock-outs. India was crowned champions after upsetting the West Indies by 43 runs in the final.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "India and Pakistan jointly hosted the 1987 tournament, the first time that the competition was held outside England. The games were reduced from 60 to 50 overs per innings, the current standard, because of the shorter daylight hours in the Indian subcontinent compared with England's summer. Australia won the championship by defeating England by 7 runs in the final, the closest margin in the World Cup final until the 2019 edition between England and New Zealand. The 1992 World Cup, held in Australia and New Zealand, introduced many changes to the game, such as coloured clothing, white balls, day/night matches, and a change to the fielding restriction rules. The South African cricket team participated in the event for the first time, following the fall of the apartheid regime and the end of the international sports boycott. Pakistan overcame a dismal start in the tournament to eventually defeat England by 22 runs in the final and emerge as winners.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "The 1996 championship was held in the Indian subcontinent for a second time, with the inclusion of Sri Lanka as host for some of its group stage matches. In the semi-final, Sri Lanka, heading towards a crushing victory over India at Eden Gardens after the hosts lost eight wickets while scoring 120 runs in pursuit of 252, were awarded victory by default after crowd unrest broke out in protest against the Indian performance. Sri Lanka went on to win their maiden championship by defeating Australia by seven wickets in the final at Lahore.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1999, the event was hosted by England, with some matches also being held in Scotland, Ireland, Wales and the Netherlands. Twelve teams contested the World Cup. Australia qualified for the semi-finals after reaching their target in their Super 6 match against South Africa off the final over of the match. They then proceeded to the final with a tied match in the semi-final also against South Africa where a mix-up between South African batsmen Lance Klusener and Allan Donald saw Donald drop his bat and stranded mid-pitch to be run out. In the final, Australia dismissed Pakistan for 132 and then reached the target in less than 20 overs and with eight wickets in hand.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "South Africa, Zimbabwe and Kenya hosted the 2003 World Cup. The number of teams participating in the event increased from twelve to fourteen. Kenya's victories over Sri Lanka and Zimbabwe, among others – and a forfeit by the New Zealand team, which refused to play in Kenya because of security concerns – enabled Kenya to reach the semi-finals, the best result by an associate. In the final, Australia made 359 runs for the loss of two wickets, the largest ever total in a final, defeating India by 125 runs.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "In 2007, the tournament was hosted by the West Indies and expanded to sixteen teams. Following Pakistan's upset loss to World Cup debutants Ireland in the group stage, Pakistani coach Bob Woolmer was found dead in his hotel room. Jamaican police had initially launched a murder investigation into Woolmer's death but later confirmed that he died of heart failure. Australia defeated Sri Lanka in the final by 53 runs (D/L) in farcical light conditions, and extended their undefeated run in the World Cup to 29 matches and winning three straight championships.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "India, Sri Lanka and Bangladesh together hosted the 2011 World Cup. Pakistan was stripped of its hosting rights following the terrorist attack on the Sri Lankan cricket team in 2009, with the games originally scheduled for Pakistan redistributed to the other host countries. The number of teams participating in the World Cup was reduced to fourteen. Australia lost their final group stage match against Pakistan on 19 March 2011, ending an unbeaten streak of 35 World Cup matches, which had begun on 23 May 1999. India won their second World Cup title by beating Sri Lanka by 6 wickets in the final at Wankhede Stadium in Mumbai, making India the first country to win the World Cup at home. This was also the first time that two Asian countries faced each other in a World Cup Final.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Australia and New Zealand jointly hosted the 2015 World Cup. The number of participants remained at fourteen. Ireland was the most successful Associate nation with a total of three wins in the tournament. New Zealand beat South Africa in a thrilling first semi-final to qualify for their maiden World Cup final. Australia defeated New Zealand by seven wickets in the final at Melbourne to lift the World Cup for the fifth time.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The 2019 World Cup was hosted by England and Wales. The number of participants was reduced to 10. New Zealand defeated India in the first semi-final, which was pushed over to the reserve day due to rain. England defeated the defending champions, Australia, in the second semi-final. Neither finalist had previously won the World Cup. In the final, the scores were tied at 241 after 50 overs and the match went to a super over, after which the scores were again tied at 15. The World Cup was won by England, whose boundary count was greater than New Zealand's.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "From the first World Cup in 1975 up to the 2019 World Cup, the majority of teams taking part qualified automatically. Until the 2015 World Cup this was mostly through having Full Membership of the ICC, and for the 2019 World Cup this was mostly through ranking position in the ICC ODI Championship.",
"title": "Format"
},
{
"paragraph_id": 22,
"text": "Since the second World Cup in 1979 up to the 2019 World Cup, the teams that qualified automatically were joined by a small number of others who qualified for the World Cup through the qualification process. The first qualifying tournament being the ICC Trophy; later the process expanding with pre-qualifying tournaments. For the 2011 World Cup, the ICC World Cricket League replaced the past pre-qualifying processes; and the name \"ICC Trophy\" was changed to \"ICC Men's Cricket World Cup Qualifier\". The World Cricket League was the qualification system provided to allow the Associate and Affiliate members of the ICC more opportunities to qualify. The number of teams qualifying varied throughout the years.",
"title": "Format"
},
{
"paragraph_id": 23,
"text": "From the 2023 World Cup onwards, only the host nation(s) will qualify automatically. All countries will participate in a series of leagues to determine qualification, with automatic promotion and relegation between divisions from one World Cup cycle to the next.",
"title": "Format"
},
{
"paragraph_id": 24,
"text": "The format of the Cricket World Cup has changed greatly over the course of its history. Each of the first four tournaments was played by eight teams, divided into two groups of four. The competition consisted of two stages, a group stage and a knock-out stage. The four teams in each group played each other in the round-robin group stage, with the top two teams in each group progressing to the semi-finals. The winners of the semi-finals played against each other in the final. With South Africa returning in the fifth tournament in 1992 as a result of the end of the apartheid boycott, nine teams played each other once in the group phase, and the top four teams progressed to the semi-finals. The tournament was further expanded in 1996, with two groups of six teams. The top four teams from each group progressed to quarter-finals and semi-finals.",
"title": "Format"
},
{
"paragraph_id": 25,
"text": "A distinct format was used for the 1999 and 2003 World Cups. The teams were split into two pools, with the top three teams in each pool advancing to the Super 6. The Super 6 teams played the three other teams that advanced from the other group. As they advanced, the teams carried their points forward from previous matches against other teams advancing alongside them, giving them an incentive to perform well in the group stages. The top four teams from the Super 6 stage progressed to the semi-finals, with the winners playing in the final.",
"title": "Format"
},
{
"paragraph_id": 26,
"text": "The format used in the 2007 World Cup involved 16 teams allocated into four groups of four. Within each group, the teams played each other in a round-robin format. Teams earned points for wins and half-points for ties. The top two teams from each group moved forward to the Super 8 round. The Super 8 teams played the other six teams that progressed from the different groups. Teams earned points in the same way as the group stage, but carried their points forward from previous matches against the other teams who qualified from the same group to the Super 8 stage. The top four teams from the Super 8 round advanced to the semi-finals, and the winners of the semi-finals played in the final.",
"title": "Format"
},
{
"paragraph_id": 27,
"text": "The format used in the 2011 and 2015 World Cups featured two groups of seven teams, each playing in a round-robin format. The top four teams from each group proceeded to the knock out stage consisting of quarter-finals, semi-finals and ultimately the final.",
"title": "Format"
},
{
"paragraph_id": 28,
"text": "In the 2019 and 2023 editions of the tournament, the number of teams participating dropped to 10. Each team is scheduled to play against each other once in a round robin format, before entering the semifinals, a similar format to the 1992 World Cup. The 2027 and 2031 World Cups will have 14 teams, with the format same as the 2003 edition.",
"title": "Format"
},
{
"paragraph_id": 29,
"text": "The ICC Cricket World Cup Trophy is presented to the winners of the World Cup. The current trophy was created for the 1999 championships, and was the first permanent prize in the tournament's history. Prior to this, different trophies were made for each World Cup. The trophy was designed and produced in London by a team of craftsmen from Garrard & Co over a period of two months.",
"title": "Trophy"
},
{
"paragraph_id": 30,
"text": "The current trophy is made from silver and gilt, and features a golden globe held up by three silver columns. The columns, shaped as stumps and bails, represent the three fundamental aspects of cricket: batting, bowling and fielding, while the globe characterises a cricket ball. The seam is tilted to symbolize the axial tilt of the Earth. It stands 60 centimetres (24 in) high and weighs approximately 11 kilograms (24 lb). The names of the previous winners are engraved on the base of the trophy, with space for a total of twenty inscriptions. The ICC keeps the original trophy. A replica differing only in the inscriptions is permanently awarded to the winning team.",
"title": "Trophy"
},
{
"paragraph_id": 31,
"text": "The tournament is one of the world's most-viewed sporting events, and successive tournaments have generated increasing media attention as One-Day International cricket has become more established. The 2011 Cricket World Cup was televised in over 200 countries to over 2.2 billion viewers. Television rights, mainly for the 2011 and 2015 World Cup, were sold for over US$1.1 billion, and sponsorship rights were sold for a further US$500 million. The ICC claimed a total of 1.6 billion viewers for the 2019 World Cup as well as 4.6 billion views of digital video of the tournament. The most-watched match of the tournament was the group game between India and Pakistan, which was watched by more than 300 million people live.",
"title": "Media coverage"
},
{
"paragraph_id": 32,
"text": "The International Cricket Council's executive committee votes for the hosts of the tournament after examining the bids made by the nations keen to hold a Cricket World Cup.",
"title": "Selection of hosts"
},
{
"paragraph_id": 33,
"text": "England hosted the first three competitions. The ICC decided that England should host the first tournament because it was ready to devote the resources required to organising the inaugural event. India volunteered to host the third Cricket World Cup, but most ICC members preferred England as the longer period of daylight in England in June meant that a match could be completed in one day. The 1987 Cricket World Cup was held in India and Pakistan, the first hosted outside England.",
"title": "Selection of hosts"
},
{
"paragraph_id": 34,
"text": "Many of the tournaments have been jointly hosted by nations from the same geographical region, such as South Asia in 1987, 1996 and 2011, Australasia (in Australia and New Zealand) in 1992 and 2015, Southern Africa in 2003 and West Indies in 2007.",
"title": "Selection of hosts"
},
{
"paragraph_id": 35,
"text": "In November 2021, ICC published the name of the hosts for ICC events to be played between 2024 and 2031 cycle. The hosts for the 50-over World Cup along with T20 World Cup and Champions Trophy were selected through a competitive bidding process.",
"title": "Selection of hosts"
},
{
"paragraph_id": 36,
"text": "Twenty nations have qualified for the Cricket World Cup at least once. Six teams have competed in every tournament, five of which have won the title. The West Indies won the first two tournaments, Australia has won six, India has won two, while Pakistan, Sri Lanka and England have each won once. The West Indies (1975 and 1979) and Australia (1999, 2003 and 2007) are the only teams to have won consecutive titles. Australia has played in eight of the thirteen finals (1975, 1987, 1996, 1999, 2003, 2007, 2015 and 2023). New Zealand has yet to win the World Cup, but has been runners-up two times (2015 and 2019). The best result by a non-Test playing nation is the semi-final appearance by Kenya in the 2003 tournament; while the best result by a non-Test playing team on their debut is the Super 8 (second round) by Ireland in 2007.",
"title": "Tournament summary"
},
{
"paragraph_id": 37,
"text": "Sri Lanka, as a co-host of the 1996 World Cup, was the first host to win the tournament, though the final was held in Pakistan. India won in 2011 as host and was the first team to win a final played in their own country. Australia and England repeated the feat in 2015 and 2019 respectively. Other than this, England made it to the final as a host in 1979. Other countries which have achieved or equalled their best World Cup results while co-hosting the tournament are New Zealand as finalists in 2015, Zimbabwe who reached the Super Six in 2003, and Kenya as semi-finalists in 2003. In 1987, co-hosts India and Pakistan both reached the semi-finals, but were eliminated by England and Australia respectively. Australia in 1992, England in 1999, South Africa in 2003, and Bangladesh in 2011 have been host teams that were eliminated in the first round.",
"title": "Tournament summary"
},
{
"paragraph_id": 38,
"text": "An overview of the teams' performances in every World Cup is given below. For each tournament, the number of teams in each finals tournament (in brackets) are shown.",
"title": "Tournament summary"
},
{
"paragraph_id": 39,
"text": "Legend",
"title": "Tournament summary"
},
{
"paragraph_id": 40,
"text": "The table below provides an overview of the performances of teams over past World Cups, as of the end of the 2019 tournament. Teams are sorted by best performance, then by appearances, total number of wins, total number of games, and alphabetical order respectively.",
"title": "Tournament summary"
},
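The ordering rule in the entry above (best performance first, then more appearances, more wins, more games, and alphabetical order as the final tie-breaker) is a straightforward multi-key sort. The sketch below is an illustration only; the field names and the dummy values are assumptions, not data from the source.

```python
from typing import NamedTuple

class TeamRecord(NamedTuple):
    name: str
    best_rank: int    # 1 = champions, 2 = runners-up, 3 = semi-finalists, ... (lower is better)
    appearances: int
    wins: int
    games: int

# Dummy values for illustration only.
records = [
    TeamRecord("Team B", 2, 12, 54, 89),
    TeamRecord("Team A", 1, 12, 69, 94),
    TeamRecord("Team C", 2, 12, 54, 89),   # ties with Team B on every count except name
]

# Lower best_rank is better; the count columns are "higher is better", so negate them;
# the team name breaks any remaining tie alphabetically.
ordered = sorted(records, key=lambda r: (r.best_rank, -r.appearances, -r.wins, -r.games, r.name))
print([r.name for r in ordered])   # ['Team A', 'Team B', 'Team C']
```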
{
"paragraph_id": 41,
"text": "Note:",
"title": "Tournament summary"
}
] | The Cricket World Cup is the international championship of One Day International (ODI) cricket. The event is organised by the sport's governing body, the International Cricket Council (ICC), every four years, with preliminary qualification rounds leading up to a finals tournament. The tournament is one of the world's most viewed sporting events and considered as the "flagship event of the international cricket calendar" by the ICC. It is widely considered the pinnacle championship of the sport of cricket. The first World Cup was organised in England in June 1975, with the first ODI cricket match having been played only four years earlier. However, a separate Women's Cricket World Cup had been held two years before the first men's tournament, and a tournament involving multiple international teams had been held as early as 1912, when a triangular tournament of Test matches was played between Australia, England and South Africa. The first three World Cups were held in England. From the 1987 tournament onwards, hosting has been shared between countries under an unofficial rotation system, with fourteen ICC members having hosted at least one match in the tournament. The current format involves a qualification phase, which takes place over the preceding three years, to determine which teams qualify for the tournament phase. In the tournament phase, 10 teams, including the automatically qualifying host nation, compete for the title at venues within the host nation over about a month. In the 2027 edition, the format will be changed to accommodate an expanded 14-team final competition. A total of twenty teams have competed in the 13 editions of the tournament, with ten teams competing in the recent 2023 tournament. Australia has won the tournament six times, India and West Indies twice each, while Pakistan, Sri Lanka and England have won it once each. The best performance by a non-full-member team came when Kenya made the semi-finals of the 2003 tournament. Australia is the current champion after winning the 2023 World Cup in India. The subsequent 2027 World Cup will be held jointly in South Africa, Zimbabwe, and Namibia. | 2001-11-26T01:50:31Z | 2023-12-31T14:17:08Z | [
"Template:Efn",
"Template:Cite news",
"Template:Official website",
"Template:Small",
"Template:Portal",
"Template:Better source needed",
"Template:Flagicon",
"Template:Smalldiv",
"Template:Cite book",
"Template:Authority control",
"Template:Cr",
"Template:CSS image crop",
"Template:Flag",
"Template:Location map ",
"Template:Cricon",
"Template:Reflist",
"Template:Navboxes",
"Template:Short description",
"Template:Pp",
"Template:Convert",
"Template:Sort",
"Template:Main",
"Template:Notelist",
"Template:Abbr",
"Template:Ubl",
"Template:Bg",
"Template:Cite web",
"Template:Use dmy dates",
"Template:Infobox cricket tournament main",
"Template:Clarify",
"Template:Diagonal split header 2",
"Template:Tooltip",
"Template:Nowrap",
"Template:Webarchive",
"Template:About",
"Template:Season sidebar",
"Template:Dubious"
] | https://en.wikipedia.org/wiki/Cricket_World_Cup |